Why wasn't the KRACK exploit discovered sooner?

From what I've read, the issue comes down to performing step 3 of the 4-way handshake, and the consequences of performing that step more than once. Considering the complexity of these kinds of algorithms, I'm somewhat surprised that the concept is so 'simple'.

How can it be that a system of this complexity was designed without anyone considering what would happen if you performed that step twice? In some sense, it feels like this should have been obvious. It's not really a subtle trick; it's a blatantly obvious defect, or at least that's the impression I'm getting.
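To make the mechanism concrete, here is a minimal C sketch of the client-side behavior as I understand it; all names here are made up for illustration, and the real 802.11i state machine is far more involved. The point is that installing the pairwise key resets the per-packet nonce, so a naively handled retransmission of message 3 reinstalls the key and resets the nonce, and under CCMP a reused nonce means reused keystream:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical client-side state, for illustration only. */
struct supplicant {
    uint8_t  ptk[16];   /* pairwise key negotiated by the handshake */
    uint64_t tx_nonce;  /* per-packet nonce (CCMP packet number)    */
};

/* Installing a key resets the transmit nonce to zero. */
static void install_ptk(struct supplicant *s, const uint8_t key[16])
{
    for (int i = 0; i < 16; i++)
        s->ptk[i] = key[i];
    s->tx_nonce = 0;
}

/* Naive handler: installs the key every time message 3 arrives,
 * including retransmissions of it.                               */
static void on_handshake_msg3(struct supplicant *s, const uint8_t key[16])
{
    install_ptk(s, key);
}

int main(void)
{
    struct supplicant s = {0};
    const uint8_t key[16] = {0};

    on_handshake_msg3(&s, key);   /* legitimate message 3       */
    uint64_t n1 = s.tx_nonce++;   /* first frame uses nonce 0   */

    on_handshake_msg3(&s, key);   /* attacker replays message 3 */
    uint64_t n2 = s.tx_nonce++;   /* nonce 0 is used again      */

    printf("nonce reuse: %llu == %llu\n",
           (unsigned long long)n1, (unsigned long long)n2);
    return 0;
}
```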

Dave Cousineau
  • It may have been discovered sooner ... just not publicly disclosed. – Andrew Grimm Oct 17 '17 at 01:57
  • In the title you ask why it was not discovered sooner which I interpret as anything from start of design to now. But in the body you seem to care more about the initial design, i.e. start from design until release of standard. Which of these is what you are really asking? – Steffen Ullrich Oct 17 '17 at 02:51
  • @SteffenUllrich, I think both are clearly encompassed in the title of the question. Why wasn't the exploit discovered **sooner** (than it was)? – Wildcard Oct 17 '17 at 02:52
  • @Wildcard: the difference in interpreting this question is between asking why it was not discovered during the initial (closed) design process (which is the interpretation Polynomial seems to answer) vs. why it was not discovered while several open implementations existed or were developed (which is what I read into the question). – Steffen Ullrich Oct 17 '17 at 02:54
  • @SteffenUllrich, ah, I see. It seems the two existing answers address these two different aspects, such that the OP "clarifying" which he intended would invalidate one or the other of them. I've upvoted both (and suggested an edit on yours). :) – Wildcard Oct 17 '17 at 03:13
  • Hindsight always seems obvious... – Andy Oct 17 '17 at 13:14
  • "How can this be?" and "How could this happen?" type questions are usually asked not as a way to request actual information, but as a way to cast aspersions on those who failed to anticipate the problem. – barbecue Oct 17 '17 at 16:23
  • @barbecue yes, tbh I didn't really realize that it would play out that way but maybe I should have. I didn't really mean it in a "who's to blame" sort of way. I meant it more as "is this not as obvious as it seems" or "is this not as rigorous as I'd expect it to be". As I explained, it seems like it might have been really obvious, the kind of thing that could be avoided just by failing when something that shouldn't be repeated is repeated. And if it's more complex than that, then that's what I'm interested in. – Dave Cousineau Oct 17 '17 at 16:55
  • @sahuagin it might be better to word such questions as "what" rather than "how". "What kinds of procedures were in place, and could they have detected this flaw?" or "What kinds of software testing methodologies could have detected this flaw earlier in the process." – barbecue Oct 17 '17 at 18:55
  • I'm sure 999 times as many problems *were* discovered sooner, which means this question is based on survivor bias. (Though then the question is why we only find 99.9% of problems sooner) – user253751 Oct 17 '17 at 21:56
  • @AndrewGrimm is right that it may have been secretly discovered. My guess is that the NSA or worse yet, some foreign agency has been using this against us for years, and intentionally didn't want us to discover it. – NH. Oct 18 '17 at 16:17

3 Answers

The 802.11 specification that describes WPA2 (802.11i) is behind a paywall and was designed by a few key individuals at the IEEE. The standard was reviewed by engineers, not by cryptographers, and the details of its functionality (e.g. retransmission) were not widely known or studied by security professionals.

Cryptographer Matthew D. Green wrote a blog post about this subject, and I think this section sums it up quite nicely:

> One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications — you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.
>
> The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months — coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

Polynomial
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/67231/discussion-on-answer-by-polynomial-why-wasnt-the-krack-exploit-discovered-soone). – Rory Alsop Oct 17 '17 at 14:27
  • +1 although this does then raise the question of why a security protocol was so widely adopted despite lack of heavy scrutiny, something that isn't the case for example with encryption or hashing algorithms. – Jon Bentley Oct 19 '17 at 11:25
  • While the specification may have been written by engineers, it was ultimately implemented by software developers. Security-related features (like secure handshakes) should have been built to safely deal with replay attacks even though the spec said nothing about it. The fact that "KRACK" is so easy to mitigate by all vendors without an official change to the specification shows just how easily this could have been handled way back in 2004 when WPA2 -- a standard established in particular to fix the dismal security in WEP/WPA(1) -- was published. – Christopher Schultz Oct 19 '17 at 16:35
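To illustrate Schultz's point, here is a minimal sketch, again with made-up names, of roughly the shape of the guard that vendors deployed: accept the retransmitted message 3, but refuse to reinstall a key that is already in use, so the nonce is never reset mid-session:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical supplicant state; names are made up for illustration. */
struct supplicant {
    uint8_t  ptk[16];
    uint64_t tx_nonce;
    bool     ptk_installed;
};

/* The mitigation: process a retransmitted message 3, but never
 * reinstall a key that is already in use, so the per-packet
 * nonce is never reset mid-session.                             */
static void on_handshake_msg3(struct supplicant *s, const uint8_t key[16])
{
    if (s->ptk_installed && memcmp(s->ptk, key, 16) == 0)
        return;                /* replayed message 3: do not reinstall */
    memcpy(s->ptk, key, 16);
    s->tx_nonce = 0;
    s->ptk_installed = true;
}

int main(void)
{
    struct supplicant s = {0};
    const uint8_t key[16] = {0};

    on_handshake_msg3(&s, key);   /* legitimate message 3 */
    s.tx_nonce++;                 /* send one frame       */
    on_handshake_msg3(&s, key);   /* replayed message 3   */
    printf("nonce after replay: %llu (not reset)\n",
           (unsigned long long)s.tx_nonce);
    return 0;
}
```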

> In some sense, it feels like this should have been obvious.

Remember Heartbleed, Shellshock, POODLE, TLS Triple Handshake attack, "goto fail", ... ?

In hindsight, most of these problems seem obvious and could have been prevented if the right people had just taken a closer look at the right time and in the right place. But there are only a limited number of people with the right technical expertise, and they usually have plenty of other things to do as well. Please don't expect them to be perfect.

Instead of harboring illusions about standards being perfectly designed, software being bug-free, and systems being 100% secure, one should accept that this is impossible to achieve in practice for today's complex systems. To mitigate this, one should care more about resilience and robustness, i.e. staying safe and secure even if some parts break: layer security, don't fully trust anything, and have a plan for when something breaks.

Steffen Ullrich
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/67230/discussion-on-answer-by-steffen-ullrich-why-wasnt-the-krack-exploit-discovered). – Rory Alsop Oct 17 '17 at 14:27
  • Also the fact that anyone questioning the security of said protocol's handshake would be told it's mathematically proven to be secure - in security we often shoot for the low hanging fruit -- normally no one will go after something already 'proven' to be mathematically secure. (I believe this is also why the heartbleed exploit wasn't discovered sooner too). – djsmiley2kStaysInside Oct 17 '17 at 15:07
  • To pervert Linus's Law somewhat: "In hindsight, all bugs are shallow." – Polynomial Oct 17 '17 at 16:23
  • Those issues are mostly different in that they are implementation bugs (mostly because C is dangerous or because OpenSSL is a massive mess of code), whereas KRACK is a bug in the spec that no one who reviewed it or implemented it noticed. – mcfedr Oct 18 '17 at 08:03
  • @djsmiley2k for Heartbleed, it seems like a packet with two distinct length fields and "what if these weren't equal" _is_ low-hanging fruit. – Random832 Oct 18 '17 at 19:00
  • @Random832 That's very common in programming. You have to act on a buffer, and in C, you usually have to specify the size of the buffer. In many, many cases, the size is fixed, so you can hardcode the size with things like `memcpy(dst, src, 64)` or so. The problem comes when the size of one of the buffers is attacker-controlled, which isn't always obvious when it's surrounded by dozens of fixed-sized buffers. Static analysis helps here, but it can only go so far. – forest Dec 12 '17 at 01:29
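As a footnote to the last two comments, here is a minimal sketch, with hypothetical field and function names, of the length-consistency check whose absence made Heartbleed possible: the bug amounted to trusting an attacker-controlled length and copying that many bytes regardless of how many actually arrived:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical message layout for illustration: a sender-declared
 * payload length plus the number of bytes actually received.     */
struct message {
    uint16_t       claimed_len;   /* attacker-controlled field       */
    size_t         actual_len;    /* what really arrived on the wire */
    const uint8_t *payload;
};

/* Echo the payload back only after checking that the declared
 * length is consistent with reality; Heartbleed was, in essence,
 * a missing check of this kind before a memcpy.                   */
static int echo_payload(const struct message *m, uint8_t *out, size_t out_len)
{
    if (m->claimed_len > m->actual_len || m->claimed_len > out_len)
        return -1;                     /* inconsistent lengths: reject */
    memcpy(out, m->payload, m->claimed_len);
    return (int)m->claimed_len;
}

int main(void)
{
    const uint8_t data[4] = { 'p', 'i', 'n', 'g' };
    struct message lie = { 65535, sizeof data, data };  /* lying sender */
    uint8_t out[64];

    printf("echo result: %d\n", echo_payload(&lie, out, sizeof out)); /* -1 */
    return 0;
}
```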

The paper describing KRACK discusses this very issue in section 6.6.

A couple of points: there were ambiguities in the specification. Also, formal proofs are based on a model of the specification, and at times that model does not match the actual specification, much less the implementations based on it.

Dale Wilson