
Argon2 is the winner of the Password Hashing Competition and is currently recommended by OWASP for secure storage of passwords.

One crucial step of Argon2 is determining the parameters used by the function. The current IETF draft titled "The memory-hard Argon2 password hash and proof-of-work function" has a chapter named "Parameter Choice", which provides a step-by-step guide to finding the right parameters.

  1. Select the type y. If you do not know the difference between them or consider side-channel attacks as a viable threat, choose Argon2id.

    This step seems pretty straightforward. The advice is to use Argon2id by default unless you know that some other mode is better for your use case.

  2. Figure out the maximum number h of threads that can be initiated by each call to Argon2.

    This step is the first confusing one. How does one "figure out" how many threads should be used? If only one thread is used, is it unsafe?
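For step 2, a minimal way to "figure out" the thread count on the defending server is simply to ask the operating system. A sketch in Python (the fallback value of 1 is my own assumption, since `os.cpu_count()` can return `None`):

```python
import os

# Step 2: a sensible maximum h is the number of logical hardware
# threads on the machine running Argon2.
hardware_threads = os.cpu_count() or 1

# Using only one thread is not insecure by itself; it just fills
# memory more slowly, so a fixed time budget buys less memory hardness.
print(hardware_threads)
```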

  3. Figure out the maximum amount m of memory that each call can afford.

    This step seems more straightforward again: if a server has 128 GB RAM and expects ~1024 concurrent logins at peak times, then ~128 MB is the maximum. However, it does not answer what happens if there were ten times as many users. What if 1 MB were the maximum? What if 16 KB? At which point does memory become "too little"?
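The sizing arithmetic from the example above can be written out explicitly (the server numbers are the hypothetical ones from the text, not a recommendation):

```python
# Hypothetical sizing: 128 GiB of RAM shared by ~1024 concurrent
# logins at peak gives ~128 MiB per Argon2 call.
total_ram_kib = 128 * 1024 * 1024   # 128 GiB expressed in KiB
peak_logins = 1024

memory_per_call_kib = total_ram_kib // peak_logins
print(memory_per_call_kib)          # 131072 KiB = 128 MiB

# Ten times as many users shrinks the per-call budget tenfold,
# down to roughly 12.8 MiB per call.
memory_per_call_kib_10x = total_ram_kib // (peak_logins * 10)
print(memory_per_call_kib_10x)
```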

  4. Figure out the maximum amount x of time (in seconds) that each call can afford.

    This seems straightforward: higher numbers mean more cracking resistance at the cost of login speed.

  5. Select the salt length. 128 bits is sufficient for all applications, but can be reduced to 64 bits in the case of space constraints.

    This is very helpful, as it provides a default value that is considered sane.

  6. Select the tag length. 128 bits is sufficient for most applications, including key derivation. If longer keys are needed, select longer tags.

    Just like step 5, this is helpful as it provides a sane default.
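The defaults from steps 5 and 6 are easy to apply directly. A sketch in Python using the standard library's `secrets` module (the key-derivation length of 32 bytes is my illustrative example, e.g. for an AES-256 key):

```python
import secrets

# Step 5: a 128-bit (16-byte) random salt, the recommended default.
salt = secrets.token_bytes(16)

# Step 6: a 128-bit (16-byte) tag suffices for most applications;
# pick a longer tag only when deriving a longer key, e.g. 32 bytes
# for an AES-256 key (illustrative assumption, not from the draft).
tag_length = 16
key_tag_length = 32

print(len(salt))
```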

  7. If side-channel attacks are a viable threat, or if you're uncertain, enable the memory wiping option in the library call.

    Helpful information, as it explains the threat it tries to mitigate.

  8. Run the scheme of type y, memory m and h lanes and threads, using different number of passes t. Figure out the maximum t, such that the running time does not exceed x. If it exceeds x even for t = 1, reduce m accordingly.

    This step is by far the most confusing. The idea is obviously to benchmark the system to find "good" values for m, h and t, but the draft does not say how to prioritize those parameters. Is low memory but high parallelism and many iterations good? What about high memory and parallelism, but few iterations?
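The benchmarking loop that step 8 describes can be sketched as follows. Since Argon2 is not in the Python standard library, `hashlib.scrypt` stands in as the memory-hard KDF here; with a real Argon2 binding such as argon2-cffi you would vary its time-cost parameter instead. The function name and all parameter values below are my own illustrative choices:

```python
import hashlib
import time

def max_passes_within_budget(kdf_one_call, budget_seconds):
    """Step 8: increase the pass count t until one call exceeds the
    time budget x, then return the largest t that still fits.
    Returns None when even t = 1 is too slow, in which case the
    draft says to reduce the memory m instead."""
    t = 1
    while True:
        start = time.perf_counter()
        kdf_one_call(t)
        elapsed = time.perf_counter() - start
        if elapsed > budget_seconds:
            return None if t == 1 else t - 1
        t += 1

# Stand-in KDF: scrypt's r parameter scales both time and memory
# roughly linearly, mimicking extra passes. Values are illustrative.
def scrypt_call(t):
    hashlib.scrypt(b"correct horse", salt=b"0123456789abcdef",
                   n=2**13, r=4 * t, p=1,
                   maxmem=512 * 1024 * 1024, dklen=32)

best_t = max_passes_within_budget(scrypt_call, budget_seconds=0.1)
```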


To summarize:

  • What are the minimum values for parallelism and memory usage to be considered "secure"?
  • What priority do memory usage, parallelism and iterations have?
  • What happens if one parameter is too low but the other two are comparatively high, so that execution still takes about 1 second?

1 Answer


The Argon2 spec gives very detailed information.

How does one "figure out" how many threads should be used? If only one thread is used, is it unsafe?

The maximum number of threads you want is the number of hardware threads on your system. A 4-physical-core Intel CPU with Hyper-Threading, giving it 8 logical cores, should run Argon2 with 8 threads. If you use only one thread, you will not be making optimal use of your hardware. This isn't inherently unsafe, but it does mean that the KDF will take longer to get the same amount of work done. If you have a system with a lot of hardware threads, you might just want to benchmark it.

This does not answer however, what if there would be 10 times as many users? What if 1 MB was the maximum? What if 16 KB was maximum? Which point is "too little" memory?

There's no effective lower limit beyond the limits imposed by the specification. You can bring it as low as you want, but remember that a smaller amount of memory means an attacker will be able to more easily utilize high-speed specialized processors like ASICs. The more memory you use, the more expensive it is to attack. You want to use the maximum possible. If you are forced to use very little memory due to resource constraints, there's nothing you can do about it.

This step is by far the most confusing. The idea is obviously to benchmark the system, to find "good" values for m, h and t, but it does not provide any priority to those parameters. Is low memory, but high parallelism and many iterations good? What about high memory and parallelism, but few iterations?

Parallelism is actually used to fill memory more quickly. The more hardware threads you use, the faster the memory can be filled and operated on. As for memory and iteration usage, that represents a tradeoff. With all memory-hard algorithms, an attacker can utilize a time-memory tradeoff (TMTO) attack where they can get away with using less memory than you used, but at the expense of greatly increased computation requirements. The more iterations you use, the more computations an attacker will need to perform to reduce the amount of memory they need to use. The exact optimal values depend entirely on the economics of the attack itself (e.g. it's possible to get around 16 GiB of ultra-high speed and low-latency memory using TSVs on sapphire substrate, but is that more or less expensive for a given attacker than adding more computation units to perform a TMTO attack?).

Related answers on our sister site, Cryptography (and the argon2 tag), help explain more.

forest