
Assume you use a proven hashing algorithm, like bcrypt with a work factor of 10 or PBKDF2 with 10K iterations. Normally that is already quite secure and would be sufficient for most applications.

But then you add an extra element of randomness to it: you generate a random number X between 5 and 15, run the normal hashing algorithm X times, and store the resulting hash. However, you don't store X anywhere. Instead, when a user logs in, you run the hashing algorithm 5 times and compare; if that fails, you run the hashing algorithm one extra time and compare again, and so on until it matches (or you give up at 15).
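
Roughly, in Python-flavoured pseudocode (slow_hash is just a stand-in for one pass of whichever proven algorithm is used; the PBKDF2 call is only there to make the sketch runnable):

```python
import hashlib
import secrets

MIN_ROUNDS, MAX_ROUNDS = 5, 15

def slow_hash(data: bytes, salt: bytes) -> bytes:
    # Stand-in for one pass of the chosen algorithm (here PBKDF2 with 10K iterations).
    return hashlib.pbkdf2_hmac("sha256", data, salt, 10_000)

def create_hash(password: bytes, salt: bytes) -> bytes:
    # Pick X in [5, 15], hash X times, store only the result -- X itself is discarded.
    x = MIN_ROUNDS + secrets.randbelow(MAX_ROUNDS - MIN_ROUNDS + 1)
    digest = password
    for _ in range(x):
        digest = slow_hash(digest, salt)
    return digest

def verify(candidate: bytes, salt: bytes, stored: bytes) -> bool:
    # Hash 5 times, then keep adding one round and re-checking, up to 15.
    digest = candidate
    for rounds in range(1, MAX_ROUNDS + 1):
        digest = slow_hash(digest, salt)
        if rounds >= MIN_ROUNDS and secrets.compare_digest(digest, stored):
            return True
    return False
```

(The salt would be stored next to the hash as usual; only X is left out.)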

Since you don't store X anywhere, anyone wanting to brute-force the passwords has to run the hashing not once per guess, but between 5 and 15 times, and they don't know in advance how many iterations are needed.

I'm not a cryptography researcher, so I'm not sure to what extent this idea is workable. A subset of users who get a high X might have their UX adversely affected, but the counter to this is that their passwords are slightly more secure. If the hashing really takes too long, you might be able to adjust the original work factor slightly.

Nzall
  • While I do not think you would decrease security by choosing the number of iterations from an interval that is considered secure, it would create more load on your server without really increasing security. If an attacker were able to figure out the correct password matching the hash, that would already be very bad, so increasing the workload a bit would likely not stop him. That is why hash+salt+pepper is actually used; unless the attacker also knows these, it is nearly impossible to get it right. – John Sep 03 '15 at 22:43

2 Answers


I am going to give you two answers, neither of which is information-theoretically correct, but either of which should illustrate why not to bother.

  1. Entropy.
    Basically, your scheme adds an element of randomness: between 5 and 15 hash cycles. So, any random number X such that 5 <= X <= 15... in other words, X has 11 possible values... which gives you just over 3.5 bits of entropy (a quick back-of-the-envelope calculation follows right after this list).
    Even if we call it 4 bits, this is really not a very substantial improvement, and definitely not enough to harm UX for it.
    Ergo, leave hashing dogs lie.

  2. Attack refinement.
    In your scheme, the attacker is aware of how it works, but simply does not know how many times to cycle the hash. In other words, if he were trying to brute force your hashes, he would first have to hash 5 times, then 6 times, then maybe 7 times... until, in the worst case, he has to hash 15 times.

    So? He'll just aim for hashing it 15 times, and check along the way to see if he could stop early.

    What did you just do?
    You gave the attacker an easy way out, if the number of cycles was anything less than the maximum. So why not just give everybody the maximum, and prevent the shortcut?
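
To put a number on point 1 (the back-of-the-envelope calculation promised above), assuming X is drawn uniformly from its 11 possible values:

```python
import math

possible_values = 15 - 5 + 1             # X can be 5, 6, ..., 15: 11 equally likely values
entropy_bits = math.log2(possible_values)
print(round(entropy_bits, 2))            # 11 values -> about 3.46 bits of added unpredictability
```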


A more information-theoretically correct answer might be to point at Kerckhoffs's principle, which basically states that the only secret in a cryptosystem should be the key (or, in this case, the password). The choice of algorithm, implementation details, number of rounds, and so on should never be considered secret.
But I think that's just being pedantic here.


And here are a few more points to think about:

Don't roll your own.
There is a reason all cryptographers are a bunch of snobs, and it's not their special brand of coffee.
Really, don't try to create your own algorithm, protocol, password scheme, or key management procedures. You won't make it better, and you will most likely make it worse.

On the other hand - DO use what is already well-vetted and considered secure by the best of those who know. Bcrypt (and its ilk) is pretty darn strong. Want it stronger? Use a bigger work factor. That's what it's there for.
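
For instance, with the Python bcrypt package (just a sketch; the cost values are arbitrary examples):

```python
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# The cost parameter is logarithmic: each +1 doubles the hashing work, so going
# from 10 to 12 makes every guess roughly 4x more expensive -- for everyone.
weaker = bcrypt.hashpw(password, bcrypt.gensalt(rounds=10))
stronger = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# The cost is encoded in the hash string itself, so verification needs no extra bookkeeping.
assert bcrypt.checkpw(password, stronger)
```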

AviD
    If we are to handwave things about "entropy" then we must state that though the attacker does not know _X_, the defender does not either, so, conceptually, no gain can be expected there. – Tom Leek Sep 04 '15 at 14:43
  • @TomLeek I think I was explicit enough about this being handwavey, and not strictly accurate. On the other hand, the OP did specify that the defender does not know X. – AviD Sep 04 '15 at 14:50

Password hashing is a trade-off: you make the function slower (through using many iterations), which makes both attacks and normal usage more expensive. You accept to spend more CPU on the task of verifying a password because you know it will also make the attacker spend more CPU on the task of cracking a password. A decent password hashing function offers a "cost" parameter that allows you to set the function slowness at the best level, i.e. the most expensive that you can tolerate, for your available resources and average load.

If you do the hashing X times, then you make usage X times slower, for both you and the attacker. In practice, this means that you must lower the iteration count of the function by a factor of X, to keep the computation within your own hardware budget, and this exactly nullifies any security gain. Thus, from that point of view, your proposed mechanism is simply neutral to security (except that it increases implementation complexity, which is, all other things being equal, a bad thing).
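
A toy calculation makes that neutrality concrete (the budget figure below is invented purely for illustration):

```python
# Suppose your server can afford roughly 100,000 PBKDF2 iterations per login attempt.
budget = 100_000

# Without the extra mechanism: one hash, tuned to use the whole budget.
work_plain = budget

# With the extra mechanism: the hash runs X times (10 on average), so each
# individual hash must be tuned ~10x cheaper to stay inside the same budget.
average_x = (5 + 15) / 2
iterations_per_pass = int(budget / average_x)   # 10,000 instead of 100,000
work_with_x = iterations_per_pass * average_x

print(work_plain, int(work_with_x))             # 100000 100000 -- nothing gained
```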


There still is an interesting point to consider, which is the randomness of X. Let's suppose that you generate X randomly, between 5 and 15. On average, X will be equal to 10; when validating a (correct) password, you will need to compute the hashing function about 10 times (on average). However, to decide that a password is incorrect, you have to go to 15.

There we could say: hey, that's good! The defender (your server), under normal usage, validates mostly correct passwords (users tend to type their passwords correctly(*)), while the attacker, in his dictionary attack, will try incorrect passwords most of the time. So the attacker will spend 1.5x more CPU on each password try than the defender. We just gained a 1.5x factor over the attacker! Isn't it swell?

Not so. The attacker is aware of your "random X" mechanism, and will organize his attack accordingly. That is, he will hash the words in his dictionary with 5 hashing invocations, recording the hash values (in RAM -- we are only talking about millions of hash values here, or maybe a couple of billions, nothing extreme). If one of the hash values fits, that's fine, the attacker has won. Otherwise, he will compute one extra hash invocation over each recorded value, and try again. And so on.
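
A sketch of that strategy (the slow_hash placeholder and the 5-to-15 bounds mirror the question; this is an illustration, not a real cracking tool):

```python
import hashlib

MIN_ROUNDS, MAX_ROUNDS = 5, 15

def slow_hash(data: bytes, salt: bytes) -> bytes:
    # Placeholder for one pass of the (already weakened) password hash.
    return hashlib.pbkdf2_hmac("sha256", data, salt, 10_000)

def crack(target: bytes, salt: bytes, dictionary: list) -> tuple:
    # Hash every candidate 5 times up front, keeping the intermediate states in RAM.
    states = {}
    for word in dictionary:
        digest = word
        for _ in range(MIN_ROUNDS):
            digest = slow_hash(digest, salt)
        states[word] = digest
    # Check at depth 5, then advance the whole dictionary by ONE more hash and
    # re-check, and so on up to depth 15 -- never redoing work already done.
    for rounds in range(MIN_ROUNDS, MAX_ROUNDS + 1):
        for word, digest in states.items():
            if digest == target:
                return word, rounds             # password found (and, incidentally, X)
        if rounds < MAX_ROUNDS:
            states = {w: slow_hash(d, salt) for w, d in states.items()}
    return None, None
```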

That way, the attacker will also achieve the same kind of efficiency as the defender: he too will crack a password with a cost of (on average) 10 times the cost of invoking the hash function. Thus, there again, the security advantage of the random X is nullified.

(*) This is a rather optimistic notion.


Summary: the extra X factor, random or not, does not increase security. Implemented properly, it should not decrease it much either, but since it makes the code more complex and also makes the load less predictable, I can only advise against that extra mechanism.

Tom Leek