16

I'm curious. I keep reading about how our ISPs and internet middlemen record and keep track of all DNS requests, basically leaving a trail of breadcrumbs in many logs, and also allowing DNS hijacking for advertising purposes (I'm looking at you, Cox Communications!).

Regardless of other methods for privacy/security, I'd specifically like to know if it's possible to run a DNS server on your own local network that actually has the zone information of the root DNS servers (for the .com, .net, .org domains).

I know you can set up DNS that basically just maps machines in your domain, but is it possible to request a copy/transfer of the root DNS information to be stored on your own DNS server, so you can bypass going out to the internet for DNS information at all for web browsing?

I hope I'm being clear. I do not want my DNS server to only have information about my internal network -- I want it to have duplicate information that the big internet DNS servers have, but I'd like that information stored locally on my DNS server.

Is there something like BGP's full-table exchange, but for DNS zones?

Update: Are there any products or OSS software that could basically "scrape" this information from the external DNS chain into the local cache in large quantities, so records are ready when you need them, versus caching them only when you explicitly request them?

pythonnewbie
  • 163
  • 1
  • 1
  • 5

7 Answers

15

DNS by design does not enable having an authoritative copy of all zones, as it utilizes a hierarchical naming system.

The root servers are authoritative for identifying the server responsible for the Top Level Domain (TLD) in question. For example, resolving www.example.net will first query a root server to identify the authoritative nameserver for .net. The .net nameserver will identify the authoritative nameserver for example.net, which will then return the record for www.example.net.
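The referral chain described above can be sketched in a few lines. This toy function only lists the zones an iterative resolver would walk, top down; it performs no real queries, and the example names are just illustrations:

```python
def referral_chain(name):
    """Return the zones an iterative resolver queries, root first,
    to resolve `name` (e.g. www.example.net)."""
    labels = name.rstrip(".").split(".")
    chain = ["."]  # every lookup starts at a root server
    for i in range(len(labels) - 1, 0, -1):
        # each referral hands us to the nameserver one label deeper
        chain.append(".".join(labels[i:]) + ".")
    return chain

# root -> .net TLD servers -> example.net's authoritative server
print(referral_chain("www.example.net"))  # ['.', 'net.', 'example.net.']
```

No server in that chain holds the whole database; each one only knows how to delegate one level down.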

You cannot download a copy of all zones. However, you can run a local caching nameserver. The caching nameserver will provide a local copy of all records resolved, which expire using the Time To Live (TTL) specified for the record. Please keep in mind that my explanation is a simplistic description of the DNS protocol, which can be explored in detail by reading definitions in the Request For Comments.
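The caching behavior can be sketched as follows. This is a simplified illustration of a resolver's record store, not a real resolver; the names and addresses are placeholders:

```python
import time

class TTLCache:
    """Minimal sketch of a caching resolver's record store:
    answers are kept until their TTL expires, then dropped."""

    def __init__(self):
        self._store = {}  # name -> (record, expiry timestamp)

    def put(self, name, record, ttl):
        self._store[name] = (record, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                 # cache miss: must query upstream
        record, expires = entry
        if time.monotonic() >= expires:
            del self._store[name]       # TTL expired: treat as a miss
            return None
        return record

cache = TTLCache()
cache.put("www.example.net", "93.184.216.34", ttl=300)
print(cache.get("www.example.net"))  # 93.184.216.34 while the TTL holds
```

The TTL is set by the zone's operator, which is why a cache can never substitute for the authoritative data for long.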

While NXDOMAIN hijacking can be avoided by running a local cache, keep in mind that all DNS resolution traffic will still be transmitted via your Internet connection unencrypted. Your ISP could potentially monitor that traffic and still see the communication. The contracts you have with your ISP as well as your local laws are going to be your definitive means for establishing how your communications are treated. Your ISP's contracts will include the Terms of Service, Privacy Policies and any additional contracts that you may have with your ISP.

Using encrypted protocols is one of the best methods for securing your data against eavesdropping during transit. However, even that is no guarantee of anonymity. There are additional protocols out there, such as Tor and Freenet, which attempt to introduce anonymity to the Internet, as it was never designed to be truly anonymous.

Warner
  • 23,440
  • 2
  • 57
  • 69
  • 1
The simple answer is no, you can't. The technical answer is in Warner's response above. There isn't one set of servers that contains all the DNS info; the root servers simply refer you to one of the TLD servers, which refers the request further down the line. – Rex Sep 07 '10 at 21:08
  • 1
    Some ISPs provide a way to turn off NXDOMAIN hijacking. Some ISPs provide a (stupid and fake) cookie-based mechanism to "turn off" NXDOMAIN hijacking. There are also alternative name servers that can be used instead of your ISP's name server. – Brian Sep 21 '10 at 20:36
3

A few things:

If you configure your server to use the root hints instead of using forwarders, then you don't have to worry about MITM issues (at least from ISPs and DNS hijackers). For all external DNS resolution your server would query the root hints, which would refer you to the gTLD servers for the top-level domain in question (.com, etc.), which would then refer you to the NS servers for the domain in question.
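On a BIND server, the root-hints setup is roughly the following fragment (zone and option names follow BIND 9 conventions; the hints file name is an assumption and varies by distribution):

```
// named.conf fragment: recursive server that iterates from the roots.
options {
    recursion yes;
    // No "forwarders" clause: the server resolves everything itself
    // starting from the root hints, rather than asking the ISP.
};

zone "." {
    type hint;
    file "named.root";   // the root hints file, e.g. downloaded from InterNIC
};
```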

If you really want to create your own root server you certainly can, although I don't see how it would do you much good. Here's how you do it on a Windows DNS server:

Download the DNS root zone file and save it as root.dns in the %systemroot%\system32\dns directory on your Windows DNS server, then use the DNS zone creation wizard:

1. Create a new primary forward lookup zone and deselect the option to create an AD integrated zone.
2. Type "." (without the quotes) for the zone name.
3. Select the option to use an existing file; the zone file name field will automatically be populated with the name root.dns (if it isn't, type it in).
4. Leave the option to not allow dynamic updates as is.
5. Click the Finish button after you've cycled through each step of the wizard.

You now have a root server with zones and zone records for all of the gTLD servers.
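If you prefer the command line, the same zone can be created with dnscmd. The exact switches here are my recollection of the tool's syntax, so verify with `dnscmd /?` on your server:

```shell
:: Create a primary zone named "." from the downloaded root.dns file.
:: "/load" tells the server to load the existing file rather than
:: create a fresh one. "." as the server name means the local machine.
dnscmd . /ZoneAdd . /Primary /file root.dns /load
```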

Note that this will disable the forwarding and root hints options on the server (since your server is now a root server) and also note that if the gTLD information changes, there's no way for your server to get notice of those changes.

joeqwerty
  • 108,377
  • 6
  • 80
  • 171
  • 1
Nothing stopping the ISP from hijacking the IP of the root servers (except DNSSEC)... Other than that, correct. – Chris S Sep 08 '10 at 02:29
1

You can certainly set up your own server and make it authoritative for the root, but I don't know of any way to prefill it with the root servers' zone files. You cannot simply request a zone transfer, so I guess you'll have to fill it from your caches.

Modify the root.hints on your other nameservers to point them to your private root server, and let the testing begin.

But keep in mind that the root servers only know which servers are authoritative for the TLDs, nothing else. You'd essentially need to recreate the entire hierarchy of servers, which seems like an impossible task.

Martijn Heemels
  • 7,438
  • 6
  • 39
  • 62
1

For closely related servers there are zone transfers, which function much like BGP announcements. For security reasons, these are usually blocked for unrelated servers.
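Where a transfer is permitted, you can request one explicitly with dig. The zone and server names below are placeholders; against most public servers this will simply fail with "Transfer failed.":

```shell
# Ask ns1.example.com for a full zone transfer (AXFR) of example.com.
# This only succeeds if the server's allow-transfer policy permits
# the requesting address -- typically only secondary nameservers.
dig @ns1.example.com example.com AXFR
```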

If you run a caching name server, it will copy the root server list and very soon have the NS records for .com, .net, etc. cached. There is a very good reason that DNS is distributed: otherwise everyone would be working with obsolete data, the size of the database would be quite large, and the majority of the data would be of no interest to you.

There are options to decrease the risk of DNS poisoning, and good software deals with the problems as they become known. There are organizations that work at providing sanitized data and can be used as upstream providers; these will filter out some poisoning attempts. Look at using OpenDNS or Google as upstream providers.

The root DNS zones are now signed, and I am increasingly seeing my mail server report that the DNS data was signed. The signing of DNS has been reported as a requirement for IPv6. Signed DNS makes cache poisoning very difficult, but adds to the difficulty of managing DNS.
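You can see the signatures yourself by asking for DNSSEC records with dig (the output shape varies by resolver, and the "ad" flag only appears when your resolver validates):

```shell
# Request DNSSEC records for the .net zone's SOA. A signed response
# includes RRSIG records alongside the answer, and a validating
# resolver sets the "ad" (authenticated data) flag in the header.
dig +dnssec net. SOA
```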

BillThor
  • 27,354
  • 3
  • 35
  • 69
0

Yes, one of the features of DNS servers is local caching of frequently requested queries, though implementations too often bypass the specified TTL.

You can certainly run your own DNS, no problem. But for the root servers and top-level domain servers, you will have to ask Uncle Sam.

Logging all DNS requests is possible, but would be insane.

  • But relying on the cache would imply that you've already had to send a DNS request out for the domain in question, which defeats the entire purpose of trying to have entirely(or mostly) local DNS requests only. – pythonnewbie Sep 07 '10 at 20:52
0

You can run your own root servers if you like; they just aren't what you think they are. Check this out: http://en.wikipedia.org/wiki/Alternative_DNS_root

Recursion
  • 609
  • 2
  • 7
  • 19
-2

Make a program that will crawl down links and make random queries, or run through a list of queries, for you. Log the IP addresses, domains, and whatever the actual technical records are, and you'll be able to actually move around the web to all the places your bot(s) have visited.

I know little of the literal technical aspects that would need to go into this, but having the entire hierarchy at your fingertips invites literally dissolving this out of the hands of the root servers. I would also suggest making the active, growing list of it completely publicly downloadable, both on the regular web and on node-based systems like IPFS (InterPlanetary File System). Heck, if this were a project I would be more than happy to donate computation power for such an effort, too.