11

BREACH, a new attack on SSL that targets HTTP compression, has recently been publicly announced.

I manage a few web servers. How can I audit them to check which of them are potentially vulnerable to BREACH? Is there a simple way to scan a web server to check this?

(The emerging advice for how to defend against BREACH seems to be: turn off HTTP compression. So, a way to audit a web server to check whether it has disabled HTTP compression might be sufficient. At time of writing, the Qualys SSL Labs SSL tester, available via the SSL Pulse page, does not seem to test for this.)

D.W.
  • 98,420
  • 30
  • 267
  • 572
  • If you have HTTP compression and you use SSL, then you're more likely to be vulnerable. It's as simple as that. Here's a way to test for that https://scottlinux.com/2012/09/13/enable-or-disable-compression-in-apache/ – Adi Aug 02 '13 at 19:02
  • I'd prefer if _you_ try that solution out and then let us know in an answer what you find with your tests and how you worked it out on your servers, and then I'd be happy to upvote that. It'd be more of a hands-on experience and I'm sure many of us would appreciate that. – Adi Aug 02 '13 at 19:24

2 Answers

9

Stricto sensu, you cannot really have a generic test. In HTTP, the client announces which compression schemes it supports with an Accept-Encoding header line; the server then feels free to use any of those schemes. @Adnan points to this blog post, which describes how one can manually send an HTTP request to a server and see how the server responds. As an audit method for checking HTTP compression support, this has two problems:

  1. There is no finite list of possible compression algorithms. At least gzip, deflate and compress are common. With TLS-level compression, each "compression method" was identified by a single byte, so there were only 255 possible compression methods (not counting the "no compression" method), and an exhaustive test was possible. That is not the case here.

  2. The server is free to apply compression or not, as it sees fit, on any document. For instance, the Apache documentation shows how the decision to compress can be made to depend on the type of the target document, but also on its location in the site's directory tree. IIS makes a distinction between "static compression" and "dynamic compression": static compression is applied only to pages or documents that exist as physical files on disk. With IIS, the decision to compress or not can thus depend on how the requested page is generated, so it can vary for each individual URL...
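Even with those caveats, the manual check from the blog post is easy to script for a given URL. Here is a minimal sketch using only the Python standard library; `probe_compression` is a name of my choosing, and a result of None only means that this particular URL was served uncompressed, not that the whole site is safe:

```python
import http.client

def probe_compression(host, port=443, path="/", use_tls=True, timeout=10):
    """Request one URL while advertising compression support and return
    the Content-Encoding the server chose (e.g. "gzip"), or None."""
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = conn_cls(host, port, timeout=timeout)
    try:
        conn.request("GET", path, headers={
            "Host": host,
            # Advertise a few common schemes; the list cannot be exhaustive.
            "Accept-Encoding": "gzip, deflate, compress",
            "User-Agent": "compression-audit/0.1",
        })
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection can be reused/closed cleanly
        return resp.getheader("Content-Encoding")
    finally:
        conn.close()
```

Because of problem 2 above, you would have to run this against every kind of URL the site serves (static files, dynamic pages, different content types), not just the front page.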

Not having many details on the attack yet, I still feel that it is, by nature, very specific to each target site, so we have a few days before us; there is no need to panic already and declare a ban on compression. Let's first get some details on the actual attack. In other words, the "emerging advice" seems a bit premature to me.
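For intuition about why compressing secrets alongside attacker-influenced input is the dangerous ingredient, here is a toy, offline illustration of the length-oracle idea (the page content and token name are hypothetical, and this is not the actual BREACH procedure, which recovers the secret adaptively over many requests):

```python
import gzip

SECRET = "csrf_token=SECRET123"  # hypothetical secret embedded in the page

def page(reflected):
    # Attacker-controlled input (e.g. an echoed search term) ends up in the
    # same compressed body as the secret.
    return ("<html>you searched for: " + reflected
            + " ... " + SECRET + "</html>").encode()

# A guess matching the secret compresses better (DEFLATE replaces the
# repeat with a back-reference) than an unrelated string of equal length,
# so the observable response length leaks information about the secret.
matching  = len(gzip.compress(page("csrf_token=SECRET123")))
unrelated = len(gzip.compress(page("wqzh_vexmj=KBYPTX790")))
```

Comparing `matching` and `unrelated` shows the matching guess yields the shorter ciphertext, which is exactly the signal a BREACH attacker measures on the wire.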

Tom Leek
  • 168,808
  • 28
  • 337
  • 475
  • A commenter in the first BREACH link above pointed out that *any compression of secrets* is what places a page at risk. To your point, there is not necessarily a generic "all's clear" on a web server if the possibility remains that one page can have a compressed secret. But to theirs, BREACH is both browser and server agnostic, so any compressed secrets you are sending are vulnerable, whether they originate from an Apache, IIS, or other server. – John Deters Aug 02 '13 at 21:15
1

Well, it is 2021 now. You could use https://www.ssllabs.com/ssltest/ or https://github.com/drwetter/testssl.sh

ibrasec
  • 11
  • 1