
I understand that an IV (initialisation vector) should be both unique and unpredictable, but I have found little guidance as to which is most important.

For an AES cipher I have a block size of 128 bits to use. I was considering the following scheme:

Bits 0 to 39 : The least significant 40 bits of milliseconds since the epoch. This repeats approximately once every 34 years, a time frame that probably exceeds the period for which current ciphers will be considered secure, so the value should be unique within the life span of the encrypted data.

Bits 40 to 63 : A 24-bit counter starting at a random value and incremented for every IV. In order to guarantee uniqueness, only one thread of execution can access the counter at a time, so the maximum number of accesses per millisecond is limited by the clock speed of a single CPU. This application will run on ordinary CPUs, whose clock speeds have plateaued well below the roughly 16 GHz it would take to exhaust this counter within 1 millisecond.

Bits 64 to 95 : I have concatenated information about the hardware and process and hashed it with SHA-256. I am using the first 32 bits of the resulting hash as a way of uniquely identifying the source process. This reduces the possibility that 2 processes on different servers could generate the same IV. The birthday paradox suggests that with about 65,000 simultaneous processes there would be roughly a 0.4 probability of two of them sharing the same 32 bits here, but with my envisaged maximum of 1,000 processes the probability is around 0.0001.

Bits 96 to 127 : 32 random bits drawn from a secure random number generator.

This IV would be transmitted alongside the cipher text.
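For concreteness, here is a minimal Java sketch of how such an IV might be assembled. The class and helper names are mine, purely for illustration; `SecureRandom`, `MessageDigest` and `ProcessHandle` are the standard Java APIs one would likely use for the random, hash and process-identity parts.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicInteger;

public final class SchemeIv {
    private static final SecureRandom RNG = new SecureRandom();
    // 24-bit counter seeded at a random value; AtomicInteger serialises access.
    private static final AtomicInteger COUNTER = new AtomicInteger(RNG.nextInt());
    // First 32 bits of SHA-256 over host/process identifiers, computed once.
    private static final int PROCESS_ID_HASH = hashProcessInfo();

    static byte[] nextIv() {
        ByteBuffer iv = ByteBuffer.allocate(16);
        long millis = System.currentTimeMillis();
        // Bits 0-39: least significant 40 bits of milliseconds since the epoch.
        for (int shift = 32; shift >= 0; shift -= 8) {
            iv.put((byte) (millis >>> shift));
        }
        // Bits 40-63: 24-bit counter (low three bytes of the int).
        int c = COUNTER.getAndIncrement();
        iv.put((byte) (c >>> 16)).put((byte) (c >>> 8)).put((byte) c);
        // Bits 64-95: truncated SHA-256 of host/process information.
        iv.putInt(PROCESS_ID_HASH);
        // Bits 96-127: 32 bits from the CSPRNG.
        byte[] random = new byte[4];
        RNG.nextBytes(random);
        iv.put(random);
        return iv.array();
    }

    private static int hashProcessInfo() {
        try {
            // Stand-in for "information about the hardware and process".
            byte[] info = (System.getProperty("user.name") + "|"
                    + ProcessHandle.current().pid()).getBytes("UTF-8");
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(info);
            return ByteBuffer.wrap(hash).getInt();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```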

My first question: is this scheme reasonable for use with the block cipher modes CBC, PCBC, CFB, OFB and CTR?

My second question: is there any advantage to passing the 128 bits through MD5 before using them? This would be to "improve" the IV by interweaving the fixed and variable parts of the input.

My third question: alas, I also have to support 64-bit block ciphers such as DES, DESede and Blowfish. I don't believe any users actually use such ciphers, but they have to be supported, and they require a mere 64-bit IV. I see nothing from the above that I can reasonably remove whilst still providing reasonable guarantees of non-repeatability and non-predictability. What can one do? All I can think of is taking the first 64 bits of a secure hash. Does that suffice?
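A minimal sketch of that truncation idea, assuming the 128-bit IV material from the scheme above as input (the class name is hypothetical):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

final class Iv64 {
    // Compress 128 bits of IV material into a 64-bit IV by hashing and
    // truncating. This preserves unpredictability, but uniqueness becomes
    // probabilistic only (a birthday bound on 64 bits).
    static byte[] fromMaterial(byte[] ivMaterial128) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(ivMaterial128);
        return Arrays.copyOf(digest, 8);  // first 64 bits of the hash
    }
}
```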

My fourth question: for CBC, PCBC and CFB only, if I use the first 64 bits as a true IV, and then feed the second 64 bits into the cipher as if they were the first block of the message but discard the output, does that work as well as using a 128-bit IV even though the cipher's block size is only 64 bits?

As a further point, the initial 32 bits of the plain text of some tens of messages could reasonably be known to an attacker. They would not be able to control the IVs generated for those messages, but could identify them by other means from within some tens of thousands of messages. The prime requirement of this cipher system is to prevent an attacker with such information from accessing any part of the plain text of any other message. If either the IV scheme or the cipher block modes I've mentioned would be weak in such circumstances, I would be grateful if people could point it out.

Simon G.
  • I wonder if there is any mathematical proof or statistic that covers the uniqueness and randomness of `/dev/random` or its equivalents for hardware-based random number generators? – makerofthings7 Dec 01 '12 at 01:16
  • @makerofthings7: You might be interested in this paper (and references) http://eprint.iacr.org/2012/251.pdf – David Cash Dec 01 '12 at 02:06
  • The standard solution is to simply draw 128 random bits from a crypto PRNG and be done with it. – CodesInChaos Dec 01 '12 at 09:30
  • Thanks for the link. The answer appears to be: don't worry about design and just use lots of random bits? My application is designed to run on server clusters. Every server is an exact duplicate of the others. There is no user input on any of them. The boot-up process is totally deterministic. Where is any entropy going to come from? It is a well-known problem in such cases that /dev/random is empty and stays empty, forcing the use of /dev/urandom. Because of this reality, I'd like to avoid a complete reliance on random numbers. – Simon G. Dec 01 '12 at 09:37
  • You seem to have the misconception that entropy is a rare resource that each generated random bit consumes. This is wrong. If your system has too little entropy, your random bits are no good, even if you don't make many of them. If your system has enough entropy, you're good for billions of billions of bits. So **make sure to provision your systems with some entropy** (if they're VMs, the host can inject entropy; how depends on your VM software and deployment method), and [read from `/dev/urandom`, not from `/dev/random`](http://security.stackexchange.com/q/3936). – Gilles 'SO- stop being evil' Dec 03 '12 at 00:37

1 Answer


Each encryption mode has its own requirements for the IV.

For CBC, the IV must be random, uniform (all block values have the same probability of being selected), and unpredictable for the attacker. The latter is necessary to defeat attackers who get to choose part of the data which is to be encrypted. This is the mechanism of the BEAST attack on SSL: in SSL 3.0 and TLS 1.0, the IV for a record is the last block of the previous encrypted record, so it can be predicted by the attacker (he saw it). Note that this IV is still, from the point of view of the attacker, random and uniform (the MAC and encryption of the previous record act together as a random number generator). Unpredictability is thus crucial.

Some other modes are less picky. For instance, EAX only requires a non-repeating IV: for a given key, you should never use the same IV twice. But there is no need for randomness, and a simple counter will do the trick.

The safe IV generation method is to use a cryptographically strong PRNG (/dev/urandom, CryptGenRandom(),... all decent development platforms offer one) to produce the full IV (all 128 bits for a 128-bit block cipher like AES). This produces adequate and secure IVs for all modes. You cannot get it wrong that way. CBC, PCBC, CFB and OFB require such a random, uniform and unpredictable IV.
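In Java, for example, this comes down to a few lines (a minimal sketch; exception handling and key management are omitted, and the class name is mine):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

final class RandomIvExample {
    private static final SecureRandom RNG = new SecureRandom();

    // Draw a fresh random IV from the CSPRNG and encrypt with AES/CBC;
    // the IV travels in the clear, prepended to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];                 // full 128-bit AES block
        RNG.nextBytes(iv);                        // every bit from the CSPRNG
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }
}
```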

If you want to use something other than a fully random IV, then you must take care of the precise needs of the encryption mode. Not using a random IV may have the benefit of not requiring a PRNG, which can be a nice thing on some embedded platforms which do not have a good source of random numbers. To some extent, CTR can use a predictable IV, but you must be cautious. CTR works by encrypting successive values of a counter, and the IV is just the starting point for that counter. You must never reuse the same counter value twice (with a given key), so when you use an IV for CTR encryption, you actually "burn" a range of IV values.
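Concretely, encrypting an n-block message starting at counter value c consumes values c through c + n - 1, so the next IV must start beyond that range. Here is a sketch of that bookkeeping for a single key, using a 128-bit big-endian counter (the class name is hypothetical):

```java
import java.math.BigInteger;

final class CtrIvAllocator {
    // Next unused counter value under the current key (128 bits, big-endian).
    private BigInteger next = BigInteger.ZERO;

    // Reserve `blocks` counter values for one message and return its
    // starting IV; values in [start, start + blocks) are now "burned".
    synchronized byte[] reserve(long blocks) {
        BigInteger start = next;
        next = next.add(BigInteger.valueOf(blocks));
        byte[] iv = new byte[16];
        byte[] raw = start.toByteArray();  // big-endian, minimal length
        int copy = Math.min(16, raw.length);
        System.arraycopy(raw, raw.length - copy, iv, 16 - copy, copy);
        return iv;
    }
}
```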

Your method:

  1. is overly complex,
  2. does not completely meet the uniformity and unpredictability requirements of most block cipher modes,
  3. and still needs a secure PRNG to be available.

As such, it is weak and pointless. If you have a secure PRNG, just use it for the whole IV. This will give you the best you can hope for: it will be fine for 128-bit block ciphers, and mostly fine for 64-bit block ciphers (and you cannot really get anything better for 64-bit block ciphers: weaknesses implied by the short block size when you encrypt more than a few dozen gigabytes with the same key are structural). Don't throw additional algorithms and structure and other gimmicks at your protocol; in cryptography, complexity is your worst enemy.

Thomas Pornin