I'm designing a DNS service for a network and have a few architecture questions. The O'Reilly/Cricket Liu DNS book and the NIST DNS security guide address these questions only in a very general way.

Here is the proposed network, which has internal (RFC 1918 space) and DMZ segments (with multiple servers, not just DNS servers) as well as mail and www servers at an outside colo. The DNS servers are the blue boxes:

[Diagram: Candidate DNS network]

Here are the requirements:

  • DNSSEC support
  • On the internal networks, delegation to internal zones in RFC 1918 space
  • Separate authoritative and recursive servers
  • Hidden master (aka hidden primary) that allows zone transfers to the slaves but answers no client queries (see the sketch after this list)
  • All nameservers run chrooted (the default with BIND on FreeBSD)
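
For concreteness, here's a minimal named.conf sketch of the hidden-master requirement, assuming BIND 9 (the ACL name, zone, and all addresses are hypothetical):

```
// Hidden master: holds the zone data but serves no ordinary clients.
// The "slaves" ACL and every address below are hypothetical.
acl slaves { 192.0.2.10; 192.0.2.11; };

options {
    recursion no;                  // authoritative only
    allow-query { slaves; };       // slaves need SOA queries for refresh checks
    allow-transfer { slaves; };    // zone transfers go to the slaves only
    notify explicit;
    also-notify { 192.0.2.10; 192.0.2.11; };
};

zone "example.com" {
    type master;
    file "master/example.com.db";
};
```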

Here are my questions:

  1. Is there anything obviously broken about this design?

  2. Are there any missing or extraneous elements here?

  3. OK to run the hidden master on the same subnet as the internal slave servers?

  4. Given relatively light DNS traffic (< 1 Mbps) on the internal and DMZ networks, are there security issues with running the caching-only servers in jails (FreeBSD's OS-level virtualization) on the authoritative servers? Or should they be on dedicated machines? (A sketch of the caching-only config I have in mind follows below.)
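
For reference, here's a minimal sketch of the caching-only resolver configuration I have in mind, assuming BIND 9 (addresses and forwarder IPs are hypothetical):

```
// Caching-only resolver (would run inside a jail); all addresses hypothetical.
acl internal { 192.168.0.0/16; 10.0.0.0/8; };

options {
    listen-on { 192.168.1.53; };                 // the jail's IP
    recursion yes;
    allow-recursion { internal; };               // recurse for internal clients only
    allow-query { internal; };
    forwarders { 198.51.100.1; 198.51.100.2; };  // upstream ISP resolvers
    dnssec-validation auto;                      // validate, per the DNSSEC requirement
};
```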

Thanks in advance!

user8162
  • I assume the internal clients point at the caching internal DNS server? The internal caching server relies on the slaves for recursive lookup? What does the caching-only server between the firewall and router do? (Does it do its own recursive lookups?) -- Similarly: the remote web and email servers point to the caching DNS server, which relies on the external slave for recursion? – Daniel Widrick Sep 26 '13 at 05:35
  • All authoritative servers would respond to queries about their zones only, and would not do recursion for other zones; hence separate authoritative and recursive servers. Internal clients would use the internal and DMZ caching servers, and these in turn would use forwarders at the upstream ISPs. The caching server at the colo would also use upstream forwarders (not the authoritative server) for recursion. Make sense? – user8162 Sep 26 '13 at 06:07

1 Answer


Here are my questions:

1) Is there anything obviously broken about this design?

Nothing is obviously wrong, at least that I can see.

2) Are there any missing or extraneous elements here?

Missing: Are you comfortable not having a hot standby for your hidden master? The system seems quite engineered (I don't want to call it over-engineered without seeing your use case) to rely on a single primary host. It's outside the scope of your diagram, but do you have a contingency plan for when [not if] the primary master blows up?
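
One common approach (a sketch, assuming BIND 9; the zone and addresses are hypothetical) is to keep a standby running as a stealth slave of the hidden master, so it always holds current zone data; promoting it is then a config change plus repointing the slaves. Note this covers zone data only; keys and named.conf still need out-of-band syncing:

```
// Hypothetical standby for the hidden master, kept current as a stealth slave.
zone "example.com" {
    type slave;
    masters { 203.0.113.1; };      // the hidden master
    file "slave/example.com.db";
};
```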

Extraneous: Keep in mind that every DNS server you add to the mix is another server that must be managed. Given your usage, is it critical to have this many DNS servers?

3) OK to run the hidden master on the same subnet as the internal slave servers?

I would expect the hidden master and the authoritative DNS slaves to be in the DMZ, with the master locked down appropriately. The internal slaves are answering authoritative lookups for your zone from the internet, correct? If the internal slaves only answer queries for your zone from internal hosts, then you either have a HUGE zone, a silly number of internal lookups to your internal zone (consider caching DNS servers at the host/workstation level), or you have given too much horsepower to internal DNS. If they are answering queries from the internet, I would expect them to be in the DMZ. You are free to label them how you want.
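
(If split-horizon answers are what keep those servers internal: a view/ACL sketch, assuming BIND 9 with hypothetical zone and file names, shows a DMZ server can still hand internal clients RFC 1918 answers:)

```
// Split-horizon views: internal clients get RFC 1918 answers,
// everyone else gets routable addresses. Names are hypothetical.
acl internal { 192.168.0.0/16; 10.0.0.0/8; };

view "internal" {
    match-clients { internal; };
    zone "example.com" {
        type master;
        file "master/example.com.internal.db";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "master/example.com.external.db";
    };
};
```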

As far as the master being on the same subnet as the slaves goes: lock it down, and it should not be an issue (and it will save you some routing overhead come zone-transfer time).
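
As one way to lock it down, here's a sketch of TSIG-signed zone transfers, assuming BIND 9 (key name, secret, and addresses are hypothetical; generate a real secret with dnssec-keygen or tsig-keygen):

```
// On the hidden master: only transfer requests signed with the key succeed.
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PASTE-GENERATED-BASE64-SECRET==";  // hypothetical placeholder
};

zone "example.com" {
    type master;
    file "master/example.com.db";
    allow-transfer { key "xfer-key"; };
};

// On each slave: sign all requests to the master with the same key.
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PASTE-GENERATED-BASE64-SECRET==";  // same secret as the master
};
server 203.0.113.1 { keys { "xfer-key"; }; };
```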

4) Given relatively light DNS traffic (< 1 Mbps) on the internal and DMZ networks, are there security issues with running the caching-only servers in jails (FreeBSD's OS-level virtualization) on the authoritative servers? Or should they be on dedicated machines?

Yes, there are always security issues. But if the internal caching-only servers are locked down to accept traffic only from internal sources, are placed in jails on a presumably hardened BSD environment, and are updated and monitored regularly... an attacker has a lot of work to do to exploit the environment.

Your biggest risk (see: I'm not a professional risk analyst) is likely that an attacker, by a stroke of sheer miracle, manages to hijack one of your authoritative DNS slaves. That would likely result in partial defacement or, if the attacker is truly brilliant, some 'poisoning' and information theft (see: SSL/TLS to put a halter on that).

The next biggest (see: not a professional risk analyst) is corruption of a slave OS requiring a re-install/restore.

Ultimately:

It's a fairly solid design, and without a view into the network (which you won't be expected to provide us), it is quite hard to find shortcomings or faults in it. The only thing that clearly stands out is that there are a lot of pieces, a complex setup, and a lot of engineering here... Make sure there is a business case for it.

Ex: you could run BIND 9 as an authoritative slave that also does recursive/forwarding lookups and caching, all in one daemon (which saves the multihoming/port forwarding/other networking magic needed to get two DNS daemons answering on the same box).
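
A minimal sketch of that consolidated setup, assuming BIND 9 (zone, addresses, and forwarders hypothetical); note it trades away the authoritative/recursive separation your requirements spec:

```
// One daemon, both roles: authoritative slave for the zone,
// plus recursion for internal clients. Addresses hypothetical.
acl internal { 192.168.0.0/16; };

options {
    recursion yes;
    allow-recursion { internal; };               // recursion for internal hosts only
    forwarders { 198.51.100.1; 198.51.100.2; };  // optional upstream forwarders
};

zone "example.com" {
    type slave;
    masters { 203.0.113.1; };
    file "slave/example.com.db";
};
```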

Daniel Widrick
  • 2a. Good idea about a hot standby for the hidden master. I haven't done this before; is it really as simple as keeping the zone, key and config files updated with rsync or whatever, and keeping the host OS updated? – user8162 Sep 26 '13 at 17:14
  • 1
    I wouldn't be as bold to say "as simple as" but once setup properly the DNS servers should be stable and keep themselves in sync. If you want to have rotating keys, rysnc might be a good option (You should rotate the keys if you get compromised) but it may not be a necessity depending on your security domain. – Daniel Widrick Sep 26 '13 at 17:24
  • 2b."is it critical to have this many DNS servers?" I could eliminate a lot of servers by (a) not running a hidden master and (b) configuring authoritative servers to handle recursion from internal and DMZ clients. However, neither is a security best practice. See [section 3.2.2 of the NIST DNS security guide](http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-81-2.pdf) and [pages 28 and 94 of the Michael W. Lucas guide to DNSSEC](https://www.michaelwlucas.com/nonfiction/dnssec-mastery). This is why I spec'd separate authoritative and recursive servers as a requirement. – user8162 Sep 26 '13 at 17:25
  • 1
    The question you'll have to answer is: Do you have a business **need** for the complexity? Just blindly following a "Best Practice" can lead to over engineering. In a small shop <50 employees... Building a PCI Compliant solution is often much more expensive than just outsourcing purchasing to a vendor. – Daniel Widrick Sep 26 '13 at 17:27
  • Yes, we need to run DNSSEC. As described in the Lucas book and elsewhere, an authoritative server never returns authenticated data. Thus, internal clients doing DNSSEC validation would need to query a recursive server. – user8162 Sep 26 '13 at 17:36
  • 3. "I would expect the hidden master, and authoritative dns slaves to be in the dmz." I'm a little shaky on this. Currently, the internal authoritative servers use split-brain DNS to handle cases where a host resolves to an RFC 1918 address for internal clients, but resolves to a routable address for DMZ or colo clients. If we move the now-internal authoritative servers out to the DMZ, could we still use view and acl statements to service internal requests? IOW, what's the benefit of moving hidden master and authoritative dns slaves to the DMZ? – user8162 Sep 26 '13 at 18:19
  • I hope this isn't too far down in the comments to get picked up, but I'm still curious as to the advantages/disadvantages of placing hidden master and authoritative slaves in the DMZ vs. the internal network. I see at least one disadvantage; what's the upside? Thanks – user8162 Oct 01 '13 at 00:56
  • Placing the hidden master in the DMZ is a rough call. By placing the authoritative slaves in the DMZ, you might avoid a lot of work setting up and maintaining network firewall rules for access to the slaves. At the same time, external slaves could transfer from the master directly without any firewall voodoo. Moving the two sets of servers from the internal network to the DMZ moves extraneous [potentially external] traffic out of and away from the internal network [which is presumed to be on some form of lockdown]. – Daniel Widrick Oct 01 '13 at 01:02
  • Thanks. In general I'd agree that it's a lot easier to omit firewalls when doing server designs. In this case, I'd neglected to show that the same firewall screens both DMZ and internal traffic on different interfaces, so I'm stuck with firewall issues no matter what I do. The only advantage I can see of leaving the hidden master internal is that I won't have to "re-NAT" the delegation of internal zones. NAT is a pain, and can't go away soon enough for me. Thanks again for all your good advice. – user8162 Oct 01 '13 at 17:32