
I was planning to sign my DNS zone with DNSSEC. My zone, my registrar, and my DNS server (BIND9) all support DNSSEC. The only component that doesn't support DNSSEC is my secondary nameserver provider (namely buddyns.com).

On their website, they state this regarding DNSSEC:

BuddyNS does not support DNSSEC because it exposes to some vulnerabilities unsuited to a high-volume DNS service.

Well, I thought the usefulness of DNSSEC is currently somewhat questionable, as most resolvers don't check whether records are signed correctly. What I didn't know was that, according to their statement, it seems providing it would expose security vulnerabilities of some kind.

What are those "vulnerabilities"?

Johann Bauer

3 Answers


DNSSEC has some risks, but they are not directly related to reflection or amplification. The EDNS0 message size expansion is a red herring in this case. Let me explain.

Any exchange of packets that does not depend on a previous proof of identity is subject to abuse by DDoS attackers who can use that unauthenticated packet exchange as a reflector, and perhaps also as an amplifier. For example, ICMP (the protocol behind "ping") can be abused in this way. As can the TCP SYN packet, which solicits up to 40 SYN-ACK packets even if the SYN was spoofed to come from some victim who doesn't want those SYN-ACK packets. And of course, all UDP services are vulnerable to this attack, including NTP, SSDP, uPNP, and as noted by other responses here, also including DNS.

Most intrusion detection, intrusion prevention, and load balancer appliances are bottlenecks, unable to keep up with "line rate" traffic. Many routers, and some switches, can't run at line rate either. These bottlenecks, by being the smallest thing "in the path", and smaller than the links themselves, are the actual target of congestion-based DDoS attacks. If you can keep somebody's firewall busy with attack traffic, then good traffic won't get through, even if the links aren't full. And what slows down a firewall isn't the total number of bits per second (which larger messages, such as those EDNS0 and DNSSEC produce, would increase), but rather the total number of packets per second.

There's a lot of urban legend out there about how DNSSEC makes DDoS worse because of DNSSEC's larger message size, and while this makes intuitive sense and "sounds good", it is simply false. But if there were a shred of truth to this legend, the real answer would still lie elsewhere: DNSSEC always uses EDNS0, but EDNS0 can be used without DNSSEC, and many normal non-DNSSEC responses are as large as a DNSSEC response would be. Consider the TXT records used to represent SPF policies or DKIM keys, or just any large set of address or MX records. In short, no attack requires DNSSEC, and thus any focus on DNSSEC as a DDoS risk is misspent energy.

DNSSEC does have risks! It's hard to use, and harder to use correctly. Often it requires a new workflow for zone data changes, registrar management, and installation of new server instances. All of that has to be tested and documented, and whenever something breaks that's related to DNS, the DNSSEC technology must be investigated as a possible cause. And the end result, if you do everything right, will be that, as a zone signer, your own online content and systems will be more fragile to your customers. As a far-end server operator, the result will be that everyone else's content and systems will be more fragile to you. These risks are often seen to outweigh the benefits, since the only benefit is to protect DNS data from in-flight modification or substitution. That attack is so rare as to not be worth all this effort. We all hope DNSSEC becomes ubiquitous some day, because of the new applications it will enable. But the truth is that today, DNSSEC is all cost, no benefit, and with high risks.

So if you don't want to use DNSSEC, that's your prerogative, but don't let anyone confuse you that DNSSEC's problem is its role as a DDoS amplifier. DNSSEC has no necessary role as a DDoS amplifier; there are other cheaper better ways to use DNS as a DDoS amplifier. If you don't want to use DNSSEC, let it be because you have not yet drunk the Kool Aid and you want to be a last-mover (later) not a first-mover (now).

DNS content servers, sometimes called "authority servers", must be prevented from being abused as DNS reflecting amplifiers, because DNS uses UDP, and because UDP is abusable by spoofed-source packets. The way to secure your DNS content server against this kind of abuse is not to block UDP, nor to force TCP (using the TC=1 trick), nor to block the ANY query, nor to opt out of DNSSEC. None of those things will help you. You need DNS Response Rate Limiting (DNS RRL), a completely free technology which is now present in several open source name servers including BIND, Knot, and NSD. You can't fix the DNS reflection problem with your firewall, because only a content-aware middlebox such as the DNS server itself (with RRL added) knows enough about the request to be able to accurately guess what's an attack and what's not. I want to emphasize, again: DNS RRL is free, and every authority server should run it.
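The core mechanism of DNS RRL can be sketched in a few lines of Python. This is a toy illustration of the idea, not the ISC-TN-2012-1 specification or any shipping implementation: identical responses headed to the same client netblock are limited per second, and every `slip`-th suppressed answer goes out truncated (TC=1) instead of being silently dropped, so a legitimate client sharing a netblock with a spoofing victim can still retry over TCP. All class and parameter names here are assumptions made up for the example.

```python
import time
from collections import defaultdict

class ResponseRateLimiter:
    """Toy sketch of the DNS RRL idea: limit identical responses per
    client netblock per second, with a 'slip' of truncated replies."""

    def __init__(self, responses_per_second=5, slip=2):
        self.rate = responses_per_second
        self.slip = slip  # every slip-th suppressed answer is sent with TC=1
        self.state = defaultdict(lambda: {"window": 0, "count": 0, "slipped": 0})

    def decide(self, client_ip, qname, qtype, now=None):
        """Return 'answer', 'truncate' (send TC=1), or 'drop'."""
        now = int(now if now is not None else time.time())
        prefix = ".".join(client_ip.split(".")[:3])        # /24 netblock
        s = self.state[(prefix, qname.lower(), qtype)]
        if s["window"] != now:                             # new 1-second window
            s["window"], s["count"], s["slipped"] = now, 0, 0
        s["count"] += 1
        if s["count"] <= self.rate:
            return "answer"
        s["slipped"] += 1
        if self.slip and s["slipped"] % self.slip == 0:
            return "truncate"                              # real clients retry over TCP
        return "drop"
```

A spoofed flood of identical queries thus yields mostly drops plus occasional tiny truncated replies, removing the amplification payoff while keeping real resolvers reachable.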

In closing, I want to expose my biases. I wrote most of BIND8, I invented EDNS0, and I co-invented DNS RRL. I've been working on DNS since 1988 as a 20-something, and I am now a grumpy 50-something, with less and less patience for half-baked solutions to misunderstood problems. Please accept my apologies if this message sounds too much like "hey you kids, get offa my lawn!"

Andrew B
Paul Vixie
    Confirming that this is Real Paul™. – Andrew B Jan 06 '16 at 20:50
  • The bottleneck could be either number of packets or number of bytes depending on the specific mixture of packets. Even if we assume an attack scenario which targets only number of packets per second, the larger replies still matter if they are large enough to trigger fragmentation. It is true that this is not an attack specifically on DNSSEC. It applies to any scenario in which responses are large and EDNS0 is used to permit such large responses. Is DNSSEC usable at all without using EDNS0 to permit larger responses? – kasperd Jan 06 '16 at 21:38
  • I disagree with the assertion that rate limiting is the one and only solution. A DNS server which does rate limiting will itself become an easier target for DDoS attacks. Ultimately I believe an extra roundtrip using some sort of cookie is the best way to avoid both of those problems. There are many ways in which such a cookie could be done. Enforcing TCP is interesting only because it has a field that can be used as cookie without introducing any new standard. – kasperd Jan 06 '16 at 21:53
  • 1
    @AndrewB that can't be the Real Paul™, there are capital letters in his post! ;-) – Alnitak Jan 06 '16 at 22:17
  • 6
    @kasperd see "draft-ietf-dnsop-cookies", currently progressing through IETF. – Alnitak Jan 06 '16 at 22:18
  • @Alnitak Okay, I lied. :( It's Blog Paul™. – Andrew B Jan 06 '16 at 22:19
  • @Alnitak I read through the latest [draft](https://tools.ietf.org/html/draft-ietf-dnsop-cookies-08). A cookie transmitted using EDNS was indeed one of the possibilities I had in mind when mentioning the use of cookies. I am glad to see that this is further ahead than I knew about. I would probably have aimed for a cookie transmitted using destination options such that it doesn't have to be reinvented for each protocol. But a cookie just for DNS is probably easier to get deployed than a generic cookie. – kasperd Jan 06 '16 at 23:59
  • @Alnitak It doesn't address the problem of how to support the entire installed base of clients without support for DNS cookies. But the existence of DNS cookies means that the drawbacks of enforcing TCP or rate limits for clients without DNS cookies support is much more tolerable. – kasperd Jan 07 '16 at 00:04
  • kasperd: [Is DNSSEC usable at all without using EDNS0 to permit larger responses?] yes and no. edns0 still has to be signaled, but the response can be kept small. hosting services who auto-sign for their customers want to avoid fragmentation and large packets, and this can be done. kasperd: [ flag I disagree with the assertion that rate limiting is the one and only solution.] huh. i didn't say it was the only solution. i said it has to be done, and i said that none of the other things i mentioned would help as much, and that once rrl is done, the other things are totally unnecessary. – Paul Vixie Jan 07 '16 at 00:05
  • 4
    kasperd: [A DNS server which does rate limiting will itself become an easier target for DDoS attacks.] i know i'm an idiot, but i'm not that idiot. dns rrl makes you less safe in no way whatsoever. it's not a defense against all known attacks, but it is a creator of no new attacks. – Paul Vixie Jan 07 '16 at 00:06
  • 2
    @kasperd the installed base is always a problem - there's no solution that will work even on the compliant installed base, let alone the non-compliant systems out there. The good news is that EDNS cookie support is already in the codebase for BIND 9.11 and (AIUI) will be turned on by default. – Alnitak Jan 07 '16 at 00:12
  • @PaulVixie I assume you are not trying to imply that rate limiting can somehow distinguish with certainty between requests with spoofed source IP address and requests with legitimate source IP address. If you cannot distinguish reliably between those two cases, it means that the rate limiting may drop legitimate requests, which is a DoS vector. – kasperd Jan 07 '16 at 00:22
  • kasperd: [I assume you are not trying to imply that rate limiting can somehow distinguish with certainty between requests with spoofed source IP address and requests with legitimate source IP address.] right, i said and meant "only a content-aware middlebox such as the DNS server itself (with RRL added) knows enough about the request to be able to accurately guess what's an attack and what's not." [If you cannot distinguish reliably between those two cases, it means that the rate limiting may drop legitimate requests] yes! [which is a DoS vector.] no! because "transaction" != "request". – Paul Vixie Jan 07 '16 at 00:44
  • @kasperd you should read up further on RRL. Some packets may be dropped, but there's a "slip" factor which results in a truncated UDP response being sent. A legitimate client will then retry over TCP. – Alnitak Jan 07 '16 at 07:58
  • @Alnitak I think you mean read up on a specific implementation of rate limiting. In that case it is essentially the same solution I proposed except that it is only activated above a certain rate of traffic. – kasperd Jan 07 '16 at 08:08
  • @kasperd No, I mean read up on DNS RRL as specified by the author of this post, as originally implemented for BIND and then implemented almost identically in the other leading authoritative DNS servers. – Alnitak Jan 07 '16 at 09:51
  • @Alnitak I can't find that RFC. What is the number? – kasperd Jan 07 '16 at 10:25
  • @kasperd it's not in an RFC (and there's no reason it had to be). It's described at http://ss.vix.su/~vixie/isc-tn-2012-1.txt – Alnitak Jan 07 '16 at 11:21
  • @Alnitak No, it doesn't have to be an RFC. But if it is not an RFC you cannot simply say rate limiting and assume that the behavior in some document you didn't reference is implied. – kasperd Jan 07 '16 at 13:32
  • @kasperd Vixie said that he co-invented DNS RRL, so a specific implementation is in fact being assumed here. Since that implementation is present in multiple nameserver daemons, many operators tend to assume that is the implementation being discussed as well. Let's just take it for granted that we're all on the same page now. – Andrew B Jan 07 '16 at 13:40
  • @AndrewB That document explicitly says that it only applies to communication between recursor and authoritative server. Let's leave the question about what to do with communication between stub and recursor aside for now. The document suggests the value 3 for LEAK-RATE, and that it should approximate the retry count of legitimate clients. I have seen legitimate recursors which did not retry at all. The document does not specify exactly how LEAK-RATE and TC-RATE interact. The specifics of that interaction will very likely influence the end result. – kasperd Jan 07 '16 at 14:04
  • I didn't say "rate limiting" - you did. This answer was very clearly talking about DNS RRL. – Alnitak Jan 07 '16 at 14:33
  • these comments have now been translated to swedish: https://www.iis.se/blogg/lyssna-inte-pa-irrlaror-om-dnssec-lar-er-i-stallet-hur-det-fungerar/ – Paul Vixie Jan 14 '16 at 17:41
  • these comments have now been translated to japanese: http://www.e-ontap.com/dns/vixiesgrumble.html – Paul Vixie Jan 16 '16 at 02:24
  • Rate limiting rarely helps reduce an amplification attack. The most one can do is keep their records small when they can, and [don't answer](http://serverfault.com/questions/744613/block-any-request-in-bind/744620#744620) for things you aren't authoritative for. RFC to follow. It is up to the ISPs to stop traffic for IPs that should not originate on the wrong side of their border, to minimize such attacks. – Aaron Jan 25 '16 at 14:33
  • 1
    <> this is not true. see http://family.redbarn.org/~vixie/afilias.png for an example of what happens on an authority server (this one serves .INFO) when they turn on DNS Response Rate Limiting (RRL). perhaps you meant to restrict your comment to recursive name servers? – Paul Vixie Jan 26 '16 at 22:52
  • 1
    @kasperd Related to the "install base" --> we are now talking in the DNS industry on how to make things move faster than a glacial pace. The reality is that the all DNS Operators (from tiny to huge) just have to get their sh*t together and upgrade their DNS servers on regular basis. It really shouldn't take 10 years to add new algorithm into DNSSEC. As the attacks on DNS and using DNS are becoming more common, it will be more and more important to take proper care of your DNS. – oerdnj Apr 24 '16 at 19:35
  • @oerdnj I agree it shouldn't take 10 years. Sadly I have seen much worse than 10 years for other upgrades. So if DNS can do it in "just" 10 years, it looks good in comparison. But who is really to blame? If an administrator uses something old but officially supported such as for example Ubuntu 12.04, are DNS cookies then going to be supported and enabled by default in relevant client and server code? If the answer is no, then you could blame Canonical. If the answer is yes, you can blame the administrators still running something older than that. – kasperd Apr 24 '16 at 19:56
  • @kasperd I think it's obvious we need to take an action to change the perception of DNS in the industry. HTTP Servers did that. Maybe we just need more vulnerabilities in the protocol with catchy names :)). (Disclaimer: I am related to Knot DNS mentioned in the blog post.) – oerdnj Apr 24 '16 at 20:07

I know of two specific vulnerabilities. There is the reflection/amplification mentioned by Håkan. And there is the possibility of zone enumeration.

Reflection / amplification

Reflection means attacks in which requests with a spoofed source IP are sent to a DNS server. The host being spoofed is the primary victim of the attack. The DNS server will unknowingly send the reply to a host which never asked for it.

Amplification refers to any reflection attack in which the reflected response consists of more bytes or more packets than the original request. Before DNSSEC+EDNS0, amplification in this way could produce at most 512 bytes. With DNSSEC+EDNS0, up to 4096 bytes can be sent, which typically spans 3-4 packets.
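The arithmetic behind those numbers can be made explicit. The query size and per-packet header overhead below are illustrative assumptions, not measurements of any particular server:

```python
# Back-of-the-envelope amplification factors for a spoofed UDP DNS query.
query_size = 64       # bytes: an assumed small query carrying an EDNS0 OPT record
classic_limit = 512   # bytes: maximum UDP response without EDNS0
edns0_limit = 4096    # bytes: a commonly advertised EDNS0 buffer size
mtu = 1500            # bytes: typical Ethernet MTU

classic_factor = classic_limit / query_size    # bytes sent to the victim per query byte
edns0_factor = edns0_limit / query_size

# Responses above the MTU fragment; ~48 bytes assumed for IP/UDP/frag headers.
packets = -(-edns0_limit // (mtu - 48))        # ceiling division
```

With these assumptions the classic amplification factor is 8x, the EDNS0 one is 64x, and a 4096-byte response spans 3 fragments, consistent with the "3-4 packets" figure above.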

There are possible mitigations for these attacks, but I don't know of any DNS server implementing them.

When the client IP has not been confirmed, the DNS server can send a truncated response to force the client to switch to TCP. The truncated response can be as short as the request (or shorter if the client uses EDNS0 and the response does not) which eliminates the amplification.
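As a sketch of that mitigation (not the behavior of any shipping server), a TC=1 reply can be built by echoing the query's header and question with the truncated bit set, so the response is never larger than the request. The helper below assumes the query carries only a question section and does no EDNS0 handling:

```python
import struct

def truncated_reply(query: bytes) -> bytes:
    """Build a minimal DNS response with TC=1 that echoes the query's ID
    and question, forcing legitimate clients to retry over TCP.
    Sketch only: assumes the query holds just a question section."""
    if len(query) < 12:
        raise ValueError("not a DNS message")
    qid, flags, qdcount, _, _, _ = struct.unpack("!6H", query[:12])
    # QR=1 (response) | TC=1 (truncated), copying the opcode and RD bits.
    resp_flags = 0x8000 | 0x0200 | (flags & 0x7800) | (flags & 0x0100)
    header = struct.pack("!6H", qid, resp_flags, qdcount, 0, 0, 0)
    return header + query[12:]   # echo the question; no answer records
```

Because the reply is exactly as long as the request, a spoofing attacker gains no amplification from it.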

Any client IP which completes a TCP handshake and sends a DNS request on the connection can be temporarily whitelisted. Once whitelisted, that IP gets to send UDP queries and receive UDP responses up to 512 bytes (4096 bytes if using EDNS0). If a UDP response triggers an ICMP error message, the IP is removed from the whitelist again.

The method can also be reversed using a blacklist, which just means that client IPs are allowed to query over UDP by default, but any ICMP error message causes the IP to be blacklisted, requiring a TCP query to get off the blacklist.

A bitmap covering all relevant IPv4 addresses could be stored in 444MB of memory. IPv6 addresses would have to be stored in some other way.
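The mechanics of such a bitmap are straightforward: one bit per address, indexed by the address's integer value. A full 2^32-bit map would take 512 MiB; the 444 MB figure above presumably counts only "relevant" (routable) space. The `PrefixBitmap` class below is a hypothetical illustration, demo-sized to a single prefix:

```python
import ipaddress

# One bit per IPv4 address: 2**32 bits / 8 = 536,870,912 bytes (512 MiB).
FULL_V4_BITMAP_BYTES = 2**32 // 8

class PrefixBitmap:
    """Hypothetical whitelist store: one bit per address within a prefix."""

    def __init__(self, network: str):
        self.net = ipaddress.ip_network(network)
        self.bits = bytearray((self.net.num_addresses + 7) // 8)

    def _pos(self, ip: str):
        offset = int(ipaddress.ip_address(ip)) - int(self.net.network_address)
        if not 0 <= offset < self.net.num_addresses:
            raise ValueError("address outside prefix")
        return offset // 8, 1 << (offset % 8)

    def add(self, ip: str):
        i, mask = self._pos(ip)
        self.bits[i] |= mask

    def discard(self, ip: str):
        i, mask = self._pos(ip)
        self.bits[i] &= ~mask

    def __contains__(self, ip: str) -> bool:
        i, mask = self._pos(ip)
        return bool(self.bits[i] & mask)
```

Membership tests and updates are O(1), which is what makes a flat bitmap attractive for per-packet decisions despite the memory cost; IPv6's address space rules this layout out entirely, as noted above.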

Zone enumeration

Whether zone enumeration is a vulnerability in the first place is subject of debate. But if you don't want all names in your zone to be public knowledge, you would likely consider it a vulnerability. Zone enumeration can mostly be mitigated through the use of NSEC3 records.

The problem which still persists even when using NSEC3 is that an attacker can find the hash of each label by simply querying for random names. Once the attacker has all the hashes an off-line brute force attack can be performed on those hashes.
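The off-line attack works because the NSEC3 hash (RFC 5155) is just iterated SHA-1 over the owner name plus a salt, both of which the records disclose. The hashing below follows RFC 5155 Section 5; the `crack` helper is an illustrative sketch of the dictionary attack, with made-up parameter names. (RFC 5155's own example zone uses salt `aabbccdd` with 12 iterations.)

```python
import base64
import hashlib

# Translate standard base32 output to the base32hex alphabet NSEC3 uses.
_TO_BASE32HEX = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                              "0123456789ABCDEFGHIJKLMNOPQRSTUV")

def wire_name(name: str) -> bytes:
    """Canonical (lowercased) DNS wire format of a domain name."""
    labels = [l for l in name.rstrip(".").lower().split(".") if l]
    return b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"

def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
    """RFC 5155 NSEC3 hash: SHA-1 over name+salt, then extra salted rounds."""
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode("ascii").translate(_TO_BASE32HEX).lower()

def crack(collected, guesses, zone, salt_hex, iterations):
    """Offline dictionary attack: map collected NSEC3 hashes back to labels.
    No queries to the authoritative server are needed at this stage."""
    found = {}
    for guess in guesses:
        h = nsec3_hash(f"{guess}.{zone}", salt_hex, iterations)
        if h in collected:
            found[h] = guess
    return found
```

Since each guess costs only `iterations + 1` SHA-1 invocations, common labels like `www` or `mail` fall quickly to even a modest wordlist.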

A proper defense against zone enumeration would require an attacker to perform a query to the authoritative server for every guess. However no such defense exists in DNSSEC.

kasperd
  • 2
    Zone enumeration does not seem like a concern for the service provider, though? (Rather a possible concern for the zone "owner", depending on their views and preferences.) – Håkan Lindqvist Jul 24 '15 at 17:56
  • @HåkanLindqvist That's right. Maybe my question was more specific than I wanted it to be. I've found this information very interesting. – Johann Bauer Jul 24 '15 at 18:41
  • The idea of whitelisting a client that tried TCP has been considered, but is apparently patented. – Alnitak Jan 06 '16 at 22:35

The thing that comes to mind is not actually DNSSEC specific but rather about the EDNS0 extension, which DNSSEC relies on.

EDNS0 allows for larger UDP payloads and larger UDP payloads can allow for worse traffic reflection/amplification attacks.
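Concretely, a client advertises a larger UDP payload size by appending an EDNS0 OPT pseudo-record (RFC 6891) to its query, so a ~40-byte request can solicit a response up to the advertised size. The query builder below is an illustrative sketch, not a full client; the fixed query ID and defaults are assumptions for the example:

```python
import struct

def dns_query(name: str, qtype: int = 255, edns_bufsize: int = 4096) -> bytes:
    """Minimal DNS query in wire format. If edns_bufsize is nonzero, an
    EDNS0 OPT pseudo-record (RFC 6891) advertising that UDP payload size
    is appended. Sketch only: fixed ID, no randomization, QCLASS=IN."""
    arcount = 1 if edns_bufsize else 0
    header = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, arcount)
    qname = b"".join(bytes([len(l)]) + l.encode("ascii")
                     for l in name.rstrip(".").split("."))
    question = qname + b"\x00" + struct.pack("!2H", qtype, 1)
    msg = header + question
    if edns_bufsize:
        # OPT RR: root owner name, TYPE=41, CLASS carries the UDP size,
        # TTL=0 (extended RCODE and flags), RDLENGTH=0.
        msg += b"\x00" + struct.pack("!HHIH", 41, edns_bufsize, 0, 0)
    return msg
```

The OPT record costs the client only 11 extra bytes, while raising the server's permitted response from 512 to 4096 bytes, which is exactly the asymmetry a reflection attack exploits.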


I don't know what the percentage of validating resolvers is, but popular nameserver software seems to ship with validation on by default, and one will easily find notable service providers that are open about doing validation, such as Comcast and the Google public resolvers.

Based on this, I would think that the percentage of validating resolvers is probably in significantly better shape than the percentage of signed zones.

Håkan Lindqvist
  • Yeah, I was thinking that the beef might really be with EDNS too. It's awfully strange to be picking the bone with DNSSEC instead of that though. – Andrew B Jul 24 '15 at 01:41