4

After reading some popular questions and answers on this website about BREACH, the only advice seems to be: don't compress anything that might contain secrets (including CSRF tokens). However, that doesn't sound like great advice. Most websites actually compress everything, so I wonder what exactly they are doing to prevent BREACH. I just checked the page with the form for changing your password here on StackExchange, and it's compressed. Everything seems to be compressed on Google too, as well as on a lot of other important websites that are supposed to care about security. So what are they doing to prevent BREACH?

Here's a list of possible solutions I've been able to gather:

  • Disable compression completely. This means wasting bandwidth, and no one seems to be doing it.
  • Only compress static resources like CSS and JS. Good idea; it's the quickest solution to implement, and it's what I plan to do on a few websites that I need to optimize.
  • Check referrers and avoid compression whenever the request comes from an unauthorized website. Interesting idea, but it almost sounds like a "dirty trick" and it's far from perfect (some clients suppress referrers, all traffic coming from other websites and search engines will end up loading uncompressed pages, etc.).
  • Rate-limit the requests. This is definitely implemented by Google: if you click on too many links too fast you might see a CAPTCHA (it has happened to me sometimes while checking a website's position in the SERP, when I was literally behaving like a bot). But are websites really relying on this to mitigate BREACH? And is it even reliable? What would a sensible and effective limit be, for example?
  • Use CSRF tokens in HTTP headers instead of the body of the page (a sketch of this idea follows this list). I haven't noticed anything like this on StackExchange, but Google seems to have interesting HTTP headers that look like tokens. I guess this will really mitigate the issue, provided the tokens are always checked (even just to display information, not only to change it). This may be the best solution, but it's the hardest to implement unless you build the application from scratch (it would require rewriting several parts of it).
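
To make that last option concrete, here is a minimal sketch of what it could look like. I'm assuming Flask here purely for illustration; the route, header name, and token helper are my own inventions, not anything StackExchange or Google has published.

    import secrets
    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-key"  # hypothetical placeholder

    @app.after_request
    def send_csrf_header(response):
        # The token travels in a response header, never in the compressed
        # HTML body, so BREACH has no secret to extract from the page.
        session.setdefault("csrf", secrets.token_hex(16))
        response.headers["X-CSRF-Token"] = session["csrf"]
        return response

    @app.route("/change-password", methods=["POST"])
    def change_password():
        # Client-side JavaScript reads the header and echoes it back
        # on every state-changing request.
        if request.headers.get("X-CSRF-Token") != session.get("csrf"):
            abort(403)
        return "OK"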

So the questions are: are the above points valid? Are there any other options? And what are the websites that follow best practices actually doing?

reed
  • Compression of static resources seems to be the most common mitigation, followed by just disabling compression (given that a lot of JS and such is minified already, so compression would not yield that much of a benefit anyway), or just not caring about BREACH at all. –  Dec 12 '19 at 17:16
  • @MechMK1, not caring at all is definitely a popular approach. Those who care about security probably just hope to detect attacks by analyzing the traffic and rate limiting the requests. Learning about BREACH was a shock to me, since HTTPS and compression are both considered important improvements nowadays. Google's tools will complain if your website is slow, for example, and will suggest enabling compression. Google will also not like websites that don't use HTTPS. It was like discovering that eating plenty of vegetables AND quitting smoking is actually bad for your health. WTF. – reed Dec 12 '19 at 17:43
  • First of all, what is the risk that you are trying to cover? What is your analysis of the threat model for your need/business? Is the compression relevant inside an SSL tunnel? Is it related to the lack of capability to inspect the compressed packets? – Hugo Dec 12 '19 at 17:51
  • @reed To be fair, it is extremely difficult to exploit BREACH in practice, because the attacker needs to be able to inject partial data into the response. –  Dec 12 '19 at 20:21
  • doesn't https compress headers nowadays? if so, the inclusion of a guid is going to make BREACH hard. – dandavis Dec 12 '19 at 20:53
  • @MechMK1, I thought the hard part was actually the ability to control the traffic, to see the size of the responses. Injecting data in a response can be done in several ways, like in posts, comments, searches, etc. OTOH, I'm not sure how the size of the response can be checked. Can it be checked directly from the browser? Or does the attacker need to control the network as a MITM? – reed Dec 12 '19 at 23:35
  • 1
    @reed Yes, it may be done via comments or posts, but you need to have those in the same response as the client secret (which, in most applications, is not displayed together with any comments). Furthermore, you need to be able to edit your payload for every attempt, and then cause a new network request from the victim. It's a really complicated setup, which is why it's not used nearly as often as injection attacks or CSRF, as they are comparatively easy to exploit. –  Dec 13 '19 at 08:47

3 Answers

5

There are several ways to mitigate BREACH effectively, but all of them have trade-offs. In order to understand how these mitigations work, we first need to look at how exactly BREACH works:

How do I BREACH TLS security?

The secret ingredients, according to p. 10 of this presentation on BREACH, are as follows:

  • Compression of the response body
  • A stable page
  • A secret in the response body
  • Attacker-supplied data
  • A known prefix

What sticks out the most is that both attacker-supplied data and the client secret need to be in the same response. Furthermore, the attacker needs to be able to cause the client to receive a lot of responses, and all of them need to contain the modified input from the attacker.

So, for instance, if I wanted to use BREACH to extract your session cookie, I couldn't just write my payload (e.g. 4bf8dfc73...) into this answer body and update it continuously. You would actually need to receive the response for every single update.
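
To see why the compressed length acts as an oracle at all, here is a minimal zlib sketch (the page layout and the secret are invented for the demo; in practice sizes can tie because of Huffman coding granularity, so real attacks need many requests and some statistics):

    import zlib

    SECRET = "csrf_token=4bf8dfc73a"  # what the attacker wants to recover

    def response_size(guess: str) -> int:
        # Attacker-reflected input and the secret sit in the same
        # compressed body -- the core BREACH precondition.
        body = f"<p>You searched for: {guess}</p><p>{SECRET}</p>"
        return len(zlib.compress(body.encode(), 9))

    # A guess that correctly extends the known prefix compresses better,
    # because DEFLATE turns the longer repetition into a back-reference.
    print(response_size("csrf_token=4bf8"))  # right guess -> smaller output
    print(response_size("csrf_token=9xq2"))  # wrong guess -> larger output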

As you can see, this is quite a convoluted setup. Sure, it is possible to do all of this with some <iframe> magic, but iframes have largely fallen out of favour, to the point where some sites refuse to be loaded via <iframe> or instruct browsers not to open <iframe>s within the document.
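
The "refuse to be framed" part, for reference, is just two standard response headers; a quick sketch, again assuming Flask for illustration:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def deny_framing(response):
        # Legacy header, still honoured by browsers
        response.headers["X-Frame-Options"] = "DENY"
        # The modern CSP equivalent
        response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
        return response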

Knowing all that, what do sites do to mitigate BREACH?

They just ignore it

This is one of the most common approaches. It may seem insecure at first, but given how many things need to align for BREACH to become viable, simply not caring about it is quite a defensible strategy.

It certainly is viable if:

  • No client secrets are in any response
  • The site is purely static

Just compress static resources

This is another very common alternative. All static compressible resources, such as stylesheets or scripts, will be compressed, while all dynamic resources will be delivered as-is.

It should be noted however that many such resources are already minified, meaning that further compression will not yield much better results.

However, with this approach you at least get some compression done, while making sure that you are not vulnerable to BREACH.
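
As a sketch of this policy (assuming Flask again; in practice you would more likely configure this in the web server, e.g. nginx or Apache, by content type):

    import gzip
    from flask import Flask, request

    app = Flask(__name__)

    # Only static, secret-free content types get compressed.
    COMPRESSIBLE = {"text/css", "application/javascript"}

    @app.after_request
    def gzip_static_only(response):
        if (response.mimetype in COMPRESSIBLE
                and "gzip" in request.headers.get("Accept-Encoding", "")
                and "Content-Encoding" not in response.headers):
            response.set_data(gzip.compress(response.get_data()))
            response.headers["Content-Encoding"] = "gzip"
            response.headers["Vary"] = "Accept-Encoding"
        # Dynamic HTML, where secrets and reflected input may mix,
        # is deliberately left uncompressed.
        return response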

Disable compression altogether

Another viable strategy. As mentioned above, many static resources are somewhat compressed already, so further compression yields diminishing returns. Just disabling compression can be fine too, especially when your network capacity is good.

Load secrets separately

Another mitigation is to load client-secrets separately, where no attacker-controlled data can be injected. This way, compression can be enabled for all requests, and attacker-controlled data is always separated from client-secrets.
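
A sketch of the idea (the endpoint name and token storage are my assumptions): the secret gets its own tiny response, which carries nothing attacker-controlled, while every other page can stay compressed.

    import secrets
    from flask import Flask, jsonify, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-key"  # hypothetical placeholder

    @app.route("/csrf-token")
    def csrf_token():
        # This response contains only the secret, so there is no
        # attacker-controlled data for BREACH to correlate it with.
        session.setdefault("csrf", secrets.token_hex(16))
        resp = jsonify(token=session["csrf"])
        resp.headers["Cache-Control"] = "no-store"
        return resp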


I'm sure that there are other ways that people have attempted to mitigate BREACH, all with varying degrees of success. But these are methods I've seen and that I would personally agree with (yes, even not caring).

  • The way I understand it (but I'm not sure I understand it correctly) is that the repeated attempts can simply be made by a malicious website that you visit, which will make AJAX requests to the vulnerable site. The malicious website can't read the content of the responses (because they are cross-domain), but I don't know if it can check their sizes (which is what the attacker actually needs). If the attacker is also able to control the network though (MITM), then the attack will work (they can see the size of the response). – reed Dec 13 '19 at 11:56
0

As with all advice from any professional, there is a risk assessment that can only be done by YOU.

  • Can you run without compression? Yes.
  • What happens if you suffer a breach? (Your answer: do you go to jail, get fined, ...?)

The standards and recommendations are clear. For TLS 1.2 and earlier, as stated in Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS):

"In order to help prevent compression-related attacks (summarized in Section 2.6 of [RFC7457]), implementations and deployments SHOULD disable TLS-level compression (Section 6.2.2 of [RFC5246]), unless the application protocol in question has been shown not to be open to such attacks."

And The Transport Layer Security (TLS) Protocol Version 1.3, Section 1.2, lists "the removal of compression" among its changes.

And the OWASP Transport Layer Protection Cheat Sheet: "TLS compression should be disabled in order to protect against a vulnerability (nicknamed CRIME) which could potentially allow sensitive information such as session cookies to be recovered by an attacker."
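
Note that these quotes concern TLS-level compression (the CRIME vector), not the HTTP-level compression that BREACH targets, as the comment below points out. At the TLS layer the fix is typically a single flag; for example, with Python's ssl module (a sketch; recent OpenSSL builds already disable TLS compression by default):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.options |= ssl.OP_NO_COMPRESSION  # never negotiate TLS compression

    # After a handshake, ssl_sock.compression() returning None confirms
    # that no TLS-level compression method was negotiated.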

jwilleke
  • 2
    TLS compression and HTTP compression are not the same though. BREACH focusses on HTTP compression, while CRIME focuses on TLS compression. –  Dec 13 '19 at 10:11
-2

According to Qualys SSL Labs, BEAST is no longer relevant due to client-side mitigations: https://blog.qualys.com/ssllabs/2013/09/10/is-beast-still-a-threat

candrews
  • BEAST is (or was) a **completely different** attack than BREACH and not related to compression at all. @Esa: the status now is that BEAST is even more irrelevant, because almost nobody is still using TLS1.0 (and certainly not SSL3). – dave_thompson_085 May 30 '20 at 01:53
  • Wow, that was a horrible mistake. I got BEAST and BREACH confused; despite both being short, all-capital words starting with B, they are very different vulnerabilities. I apologize for the confusion! – candrews May 31 '20 at 02:05