
All modern password hashing schemes are deliberately designed to include a huge amount of "busy-work", to limit the speed with which an attacker can conduct password hashing attempts. Further, a goal in newer schemes is to reduce the amount by which special-purpose hardware can be made more efficient than commodity hardware of similar cost. If a "unit of work" in each of several algorithms is defined as the amount of work that could be done per second per dollar of hardware optimized for that task, the goal is to maximize the number of work units that can be done in an acceptable timeframe using production hardware. Because commodity machines differ in how efficiently they can perform various operations, however, different algorithms will be optimal on different production machines. For example, while it's desirable for algorithms to be GPU-hostile if production machines won't have GPUs, on production platforms which can use a GPU it would be better to have a GPU-friendly busy-work algorithm.

Given the password hashing algorithm:

  hash = secureHash(password, salt)

using one or more forms of busy-work:

  busyResult = busyWork(hash, params)
  hash = secureHash(hash, busyResult, extraSalt)
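
For concreteness, here is a rough Python sketch of the construction (the names follow the pseudocode above; the repeated-SHA-256 loop is only a stand-in for whatever busy-work a given platform would actually use):

    import hashlib
    import os

    def secure_hash(*parts: bytes) -> bytes:
        # Stand-in for secureHash: one SHA-256 over all inputs, length-prefixed
        # so that different splits of the arguments cannot collide.
        h = hashlib.sha256()
        for p in parts:
            h.update(len(p).to_bytes(4, "big"))
            h.update(p)
        return h.digest()

    def busy_work(seed: bytes, iterations: int) -> bytes:
        # Placeholder busy-work: iterated hashing. The question is whether this
        # function needs any cryptographic properties beyond "takes this long".
        out = seed
        for _ in range(iterations):
            out = hashlib.sha256(out).digest()
        return out

    def hash_password(password: bytes, salt: bytes, extra_salt: bytes,
                      iterations: int = 100_000) -> bytes:
        h = secure_hash(password, salt)
        busy = busy_work(h, iterations)
        return secure_hash(h, busy, extra_salt)

    salt, extra_salt = os.urandom(16), os.urandom(16)
    stored = hash_password(b"correct horse battery staple", salt, extra_salt)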

If one assumes that secureHash has all the properties necessary for a secure hash, would overall security be reliant upon any cryptographic properties of the busyWork functions beyond the requirement that their outputs not be computable without performing a certain amount of work? If not, would it be reasonable to favor different kinds of busy-work in different production environments, if:

  1. Each password record included a list of the forms of busy-work that were used to create it, along with the appropriate parameters.

  2. Each production environment would be capable of performing all forms of busy work, even if not optimally efficiently.

  3. In the event of hardware changes that would cause the current password algorithm to take longer than ideal, a user's stored password hash could be transparently updated to use the new algorithm on the user's next login (the user need not know this was taking place); a rough sketch of such a record format and upgrade-on-login flow follows this list.
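
As a rough illustration of points 1-3 (the field names, the choice of PBKDF2 as a stand-in for the whole secureHash-plus-busy-work pipeline, and the iteration counts are all purely hypothetical):

    import hashlib
    import os

    def compute_hash(password: bytes, salt: bytes, params: dict) -> bytes:
        # PBKDF2 stands in here for the full secureHash + busyWork pipeline.
        return hashlib.pbkdf2_hmac("sha256", password, salt, params["iterations"])

    def make_record(password: bytes, params: dict) -> dict:
        salt = os.urandom(16)
        return {"salt": salt.hex(),
                "params": params,   # point 1: the busy-work recipe is stored per record
                "hash": compute_hash(password, salt, params).hex()}

    def verify_and_maybe_upgrade(record: dict, password: bytes, current: dict) -> bool:
        ok = compute_hash(password, bytes.fromhex(record["salt"]),
                          record["params"]).hex() == record["hash"]
        # Point 3: on a successful login, transparently re-hash with the
        # currently preferred parameters while the plaintext is available.
        if ok and record["params"] != current:
            record.update(make_record(password, current))
        return ok

    rec = make_record(b"hunter2", {"iterations": 100_000})
    assert verify_and_maybe_upgrade(rec, b"hunter2", {"iterations": 600_000})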

Designing, for every different flavor of system, a secure hashing algorithm optimized around that system's precise abilities would be difficult. If, however, the requirement were weaker, so that it was only necessary to show that the busy-work algorithm was close to the fastest means of producing a certain output on a given system, and that every different hash value fed into the algorithm would be likely to produce a different output (not hard to do if the output can be much larger than the input), then producing algorithms optimized for different platforms would seem much easier.

Main question: If the secureHash function is good, and the time required for at least some of the busy-work functions cannot be effectively reduced, would there be any way in which a poor busy-work function could compromise security, other than by taking time away from the better busy-work functions?

supercat

2 Answers


Does password-hashing “busy-work” need to be cryptographically secure?

Yes. You should use a standard algorithm so that there are no shortcuts available for the "busy-work". Have a look at Thomas Pornin's answer here.

Inserting the salt (yes, it must be talked about), and iterating the function at the same time, is a bit more tricky than what it usually appears. In particular, a hash function such as SHA-256 is not exactly a "random-oracle-like" function; it exhibits some internal structure. Any homemade construct could hit one of those fine details from which deadly weaknesses may emerge.

Making sure that you did it right is difficult, just like building any cryptographic algorithm. That's where bcrypt is better than any homemade construct: bcrypt has been published and in use and presumably inspected for flaws by many people over quite some time. This is basically the only hard measure of security that you can get in cryptography. The generic advice of "do not define your own algorithms" applies here too.
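
For example, a perfectly serviceable standard construction is available from Python's standard library alone (PBKDF2-HMAC here; bcrypt and scrypt follow the same pattern conceptually; the iteration count is only indicative and should be tuned to your hardware):

    import hashlib
    import hmac
    import os

    def hash_password(password: bytes, salt: bytes, iterations: int = 600_000) -> bytes:
        # PBKDF2-HMAC-SHA256: the iteration count is the standardised "busy-work".
        return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

    def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
        candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored)

    salt = os.urandom(16)
    stored = hash_password(b"correct horse", salt)
    assert verify(b"correct horse", salt, stored)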

In your statement:

If the secureHash function is good, and the time required for at least some of the busy-work functions cannot be effectively reduced, would there be any way in which a poor busy-work function could compromise security, other than by taking time away from the better busy-work functions?

Making the computation take time is the whole point, but if shortcuts exist then that time would be better spent on a "tried and tested" method of iterating the hashing algorithm. If the tried-and-tested method effectively adds 10 bits of entropy with 1024 iterations, you do not want your "poor busy-work" function to have a shortcut such that no equivalent entropy is added: that adds time for your system, but none for an attacker. A good iterated hash function adds far more time for the attacker, who has to make many guesses, than for a valid user, who makes only one.
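
To put numbers on the 1024-iterations example (simple arithmetic, nothing more):

    import math

    iterations = 1024
    # Each guess now costs ~1024 hash calls, so the attacker's total work grows
    # by the same factor as if the password had log2(1024) = 10 extra bits.
    extra_bits = math.log2(iterations)
    print(extra_bits)  # 10.0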

I realise that your question is along the lines of "assume the attacker has the fastest way of producing a hash"; however, on my production hardware I would want "the fastest method of creating a secure hash" to be used on each platform. In that case you could have different algorithms for different machines, but you should use standard ones (bcrypt, scrypt, PBKDF2), and you will be at the mercy of the weakest one if an attacker gets hold of your password database.

In your comment to a previous answer:

In most crypto functions, the assumption is that the attacker won't have the parameters necessary to use the crypto function in the forward direction and will thus have to reverse it and the goal is to maximize the difference in speed between the reverse and forward directions. Adding insecure stuff will slow down the forward direction, but probably won't affect the reverse direction as much.

A cryptographic hash function should be pretty much impossible to invert. An attacker trying to crack a hash will in fact hash their guesses using the same algorithm you have used, so they are going in the "forward direction" to see if they have a match. You should not assume that your "busy-work" or hashing algorithm is secret; assume the only secret the attacker does not know is the password. The only time things run in a "reverse direction" is when the attacker has a pre-computed rainbow or hash table, and a strong enough salt will stop that attack vector.
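
To make the "forward direction" concrete, a hypothetical offline guessing attack looks like this (the algorithm, salt and stolen hash are all assumed known to the attacker; only the password is not):

    import hashlib

    def hash_pw(password: bytes, salt: bytes) -> bytes:
        # The same, public algorithm the defender uses; only the password is secret.
        return hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    def crack(stolen_salt: bytes, stolen_hash: bytes, wordlist):
        # Guess, hash forward, compare. No inversion of the hash is ever attempted.
        for guess in wordlist:
            if hash_pw(guess.encode(), stolen_salt) == stolen_hash:
                return guess
        return None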

So it appears there is a flaw in your goal of making a hash function quick to run yet difficult to reverse. It should be slow to run and impossible to reverse. The only "reversing" you should be attempting to thwart is repeated guessing, which you do by making the attacker perform more work per guess.

SilverlightFox
  • I was thinking of algorithms like "use a cryptographic hash+PRNG to populate a large matrix, perform some expensive operations on it, hash the result, and iterate", with the bulk of the time being spent on the matrix operations. For commonly-used matrix operations, I would expect most of the shortcuts that would be usable on random data have already been found and are in use. Also, what do you think of the idea that using 20% of one's time budget on each of five independent algorithms that would have a 5% chance of being cracked would pose less risk of a more-than-an-order-of-magnitude... – supercat Jan 17 '15 at 18:41
  • ...improvement, than would a single algorithm which posed a one-in-a-million risk of an eleven-fold cost/speed improvement? Obviously if the former algorithms' risks aren't independent the risk could be much higher, but on the flip side it's generally hard to be really sure that an algorithm won't have a shortcut that would allow an eleven-fold cost/speed improvement. – supercat Jan 17 '15 at 18:45
  • @supercat `hard to be really sure that an algorithm won't have a shortcut that would allow an eleven-fold cost/speed improvement` - this is why I suggest using a standard algorithm that has been scrutinised and accepted as being secure. I can't see the advantage in using five different algorithms instead of one secure one. – SilverlightFox Jan 17 '15 at 18:55
  • An attacker willing to design custom hardware is apt to achieve some improvements in cost/speed ratio; the only question is how much. As for using small algorithms, the idea is that even if an attacker can cut the time for four of the algorithms by 99.999% and one by 50%, the overall speedup would be less than 10x. By contrast, an attacker who can design hardware for an 11x speedup will have an 11x speedup. – supercat Jan 17 '15 at 19:54
  • Also, I would expect that the amount of research that has been put into improving the cost/speed ratio of certain non-cryptographic algorithms exceeds that spent on cryptographic ones. If companies spend millions of dollars trying to develop the most efficient hardware to perform some algorithm (as is the case with many graphics algorithms), it seems likely that they're going to be much closer to the optimal efficiency an attacker could achieve, than if nobody (other than an attacker) invests any significant money on hardware optimizations. – supercat Jan 17 '15 at 20:21

It's largely a matter of semantics. Crypto is about making sure the time required to break it (by guessing some value) is as large as possible; "taking time away" means the attacker can break your security more quickly. That is, all crypto in actual use can be "broken" given infinite time (e.g. "simply" guess a private key).

If your "poor busywork" is simply "add 10 to the number X a billion times", the attacker will reduce it to "add 10 billion to X", removing all usefulness.

Joel L
  • Your example of "poor busywork" would weaken security by my aforementioned "taking time away from better busy-work functions". If one has four busywork functions, all of which take the same time to execute, and a means is found of reducing by 99.9999% the time required for *one* of them but the others cannot be significantly reduced, and if the busywork functions are designed independent of the secure hash functions, is there any way in which the poor busywork function could reduce an attacker's overall time by more than 25%? – supercat Dec 01 '14 at 15:30
  • Yes, but there's also this general logic: "the simpler the system, the more secure it is". Because not having code means that you can't have bugs in that code (if it doesn't exist). If you re-wrap "secure stuff" in "insecure stuff", it doesn't generally reduce the security of the secure part, but it's even better to not have the insecure stuff in the first place. – Joel L Dec 01 '14 at 15:34
  • The simpler the system, the greater the performance benefits available (to an attacker) by using custom hardware. Further, the goal of the busywork here is rather different from normal cryptography. In most crypto functions, the assumption is that the attacker won't have the parameters necessary to use the crypto function in the forward direction and will thus have to reverse it and the goal is to maximize the difference in speed between the reverse and forward directions. Adding insecure stuff will slow down the forward direction, but probably won't affect the reverse direction as much. – supercat Dec 01 '14 at 15:45
  • Here, the goal is to increase the work required for each million passwords to be run in the *forward* direction; if the primary hashing function is good, that should suffice to make attacks in the reverse direction impractical even if the busywork functions add no additional difficulty there. I would expect that proving that computation of some "formula" will require a certain amount of work is probably much easier than proving that computing the inverse of a function will require much more work than computing the formula itself. – supercat Dec 01 '14 at 17:41