
I have systems that perform cryptographic operations within an SSL/TLS tunnel. My concern is that I may leak timing information when encrypting, decrypting, or hashing.

Part 1

  1. Is it a good idea to have a fixed processing time (or increment thereof) when performing these crypto operations?
  2. What operations should this apply to (encrypt/decrypt or hashing)?
  3. Are some operations not sensitive to leaking timing information? (e.g. decrypting a file, hashing a password, using PGP, RSA, AES, etc.)

For example:

  • When encrypting user-supplied data with a private key, the total operation will always take 2 seconds

  • When decrypting a file, the thread will not return until the total time has reached some multiple of 0.5 seconds
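In code, the fixed-duration idea above might look like the following minimal Python sketch (the helper name `fixed_duration` is my own invention, not from any library):

```python
import time

def fixed_duration(operation, total_seconds=2.0):
    """Run `operation`, then sleep until `total_seconds` of wall time
    have elapsed, so the caller always observes the same duration.

    Illustrative sketch only: as the answers below discuss, padding
    wall time this way has real limitations under load.
    """
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    if elapsed < total_seconds:
        time.sleep(total_seconds - elapsed)
    return result
```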

Part 2

  1. If using the "multiple of" approach, what is a good value for the multiple? 0.5, 0.005 seconds?
makerofthings7
    Good crypto libraries are immune to this by not using any branches, secret-dependent lookups, etc. But if you have badly designed protocols (such as MAC-then-encrypt with CBC) it's hard to write a good implementation. – CodesInChaos Feb 12 '13 at 16:49

2 Answers


Fixed processing time is the most basic and thorough defense against timing attacks: to avoid exploitation of the data you leak through timing, well, do not leak. However, it is easier said than done. The recent "Lucky Thirteen" attack shows that it is possible to detect timing differences with microsecond accuracy when the attacker is "close" to the target (on the same LAN); the attack exploits the small leak which remained in SSL implementations that had already been patched against timing attacks -- these implementations compute the MAC even when the decryption does not yield a proper padding, precisely so as to get a processing time which was as fixed as could be achieved; but a one-microsecond variability was hard to avoid.

If you enforce a fixed "overall" processing time (e.g. 1 second for each request) then you will run into trouble with multi-tasking operating systems. If the implementation is just a sleep() call after the main processing, then the CPU is free to handle other requests, which looks good... but leaks data, since the attacker could send other requests simultaneously and work out whether your server is busy or not. Generally speaking, by sending a lot of requests to your server, the attacker can make the processing time for each request arbitrarily long, and exceed any fixed limit. If you just constrain the processing time to be an integral multiple of a fixed granularity (e.g. processing time is always n times 0.1 second, with n being an integer) then you just have "thresholds" which make the task a bit harder for the attacker, but not impossible -- and it could even help the attacker. Indeed, the "Lucky Thirteen" attack shows how exploiting such a threshold (corresponding to the way SHA-1 and SHA-256 process data in 64-byte blocks) needs only some "phasing" and makes the statistical picture clearer.
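The "multiple of a fixed granularity" approach described above could be sketched as follows (the helper name `pad_to_multiple` is hypothetical); note this is exactly the threshold scheme the paragraph warns can help the attacker:

```python
import math
import time

def pad_to_multiple(operation, granularity=0.1):
    """Delay the return until total wall time reaches the next
    multiple of `granularity` seconds above the real elapsed time.

    Illustrative only: inputs whose processing time sits near a
    threshold can still be distinguished by which side they fall on.
    """
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    # Round strictly up to the next multiple of the granularity.
    target = (math.floor(elapsed / granularity) + 1) * granularity
    time.sleep(max(0.0, target - elapsed))
    return result
```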

Right now, the best defense seems to be the addition of a random delay of a few milliseconds, which will blur the picture and prevent analysis... and, also, don't let the attacker plug into the same LAN as the server. Which should go without saying.
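A minimal sketch of such a random delay (the helper name `with_random_delay` is my own; see the comment below this answer on why jitter averages out under enough samples):

```python
import secrets
import time

def with_random_delay(operation, max_jitter_ms=5):
    """Run `operation`, then add a random delay of up to
    `max_jitter_ms` milliseconds to blur timing measurements."""
    result = operation()
    # secrets.randbelow draws from the OS CSPRNG, so the delay
    # itself is not predictable from earlier observations.
    delay_us = secrets.randbelow(max_jitter_ms * 1000)
    time.sleep(delay_us / 1_000_000)
    return result
```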

Tom Leek
  • I just [rediscovered this answer](http://security.stackexchange.com/a/7755/396) (see comments) that talks about when to use fixed duration for encryption. I'm trying to mentally synthesize all answers now... – makerofthings7 Feb 12 '13 at 16:35
  • 1
    Addition of a random delay will increase the complexity of the analysis (and the number of samples needed), but ultimately I think that any underlying variance in timing that is dependent on secrets will still leak - the random delay will 'average out', but the leakage won't. – Michael Feb 13 '13 at 13:39

The basic essence of a timing attack is to send a server different data samples and see if different inputs produce different code paths by looking at how much time was taken to process the input. The best defense against timing attacks is to keep the code on the same execution path regardless of the input.

This is far easier said than done. A simple function like memcmp() does not guarantee the same code path because it exits after the first byte mismatch; different inputs will cause it to exit at different points. A natural response is to write a custom memcmp()-esque function, but doing so is not straightforward either.
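For illustration, here is the usual shape of such a comparison in Python, which examines every byte regardless of where the first mismatch occurs (in practice you would use the standard library's hmac.compare_digest rather than rolling your own):

```python
import hmac

def constant_time_eq(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without an early exit on mismatch."""
    if len(a) != len(b):
        # Lengths are typically public (e.g. a fixed-size MAC),
        # so branching on them does not leak a secret.
        return False
    diff = 0
    for x, y in zip(a, b):
        # Accumulate differences; the loop always runs to the end.
        diff |= x ^ y
    return diff == 0
```

Caveat: in a high-level language like Python, the interpreter may still introduce data-dependent timing; the sketch shows the idea, not a guarantee.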

The biggest issue is how errors and validation are handled. Regardless of preprocessing/validation/sanity-check errors, the same code should be executed, even if you realize the output will be thrown away later. When you look at practical timing attacks, an example that pops up frequently is varying time for MAC validations. The timing of raw cryptographic primitives isn't really a concern; the bit-mangling operations of something like AES or SHA-2 will take the same amount of time regardless of what input they get. The main focus should be on how errors, special cases, and data comparisons are handled.

A fixed amount of wall time for an operation isn't necessarily the best defense; the attacker may be able to bypass it by crafting input so that the operation runs right at the threshold of your constant wall-time increments, and then seeing which inputs push the operation over the threshold. The ideal approach in most situations is to execute the exact same code regardless of the input.

However, we don't know your exact situation and the exact information you are trying to keep confidential, so the above is generic. For example, the approach of keeping code execution the same will leak information about the quantity of data being processed and we don't know if that is a concern for you. In many cases the attacker already knows the data's size (because they supplied it), but we don't know if that's a problem here.

B-Con