Argon2 is the winner of the Password Hashing Competition and is currently recommended by OWASP for secure storage of passwords.
One crucial step of using Argon2 is choosing the parameters passed to the function. The current IETF draft, titled "The memory-hard Argon2 password hash and proof-of-work function", has a chapter named "Parameter Choice", which provides a step-by-step guide to finding the right parameters.
-
Select the type y. If you do not know the difference between them or consider side-channel attacks as a viable threat, choose Argon2id.
This step seems pretty straightforward: use Argon2id as the default unless you know that some other mode is better for your use case.
-
Figure out the maximum number h of threads that can be initiated by each call to Argon2.
This step is the first confusing one. How does one "figure out" how many threads should be used? If only one thread is used, is that unsafe?
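In practice, "figure out" presumably means inspecting the hardware. A common heuristic (my assumption, not something the draft states) is to start from the machine's logical core count:

```python
import os

# Heuristic (not from the draft): use the machine's logical core count
# as an upper bound for Argon2's parallelism parameter h.
# os.cpu_count() may return None, so fall back to 1.
h = os.cpu_count() or 1
print(f"maximum threads h = {h}")
```

Whether this is the intended reading, or whether a lower value should be chosen to leave cores free for the rest of the server, is exactly the kind of thing the draft leaves open.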
-
Figure out the maximum amount m of memory that each call can afford.
This step seems more straightforward again: if a server has 128 GB of RAM and expects ~1024 concurrent logins at peak times, then ~128 MB is the maximum. However, it does not answer what happens if there were ten times as many users. What if 1 MB were the maximum? What if 16 KB? At which point does the memory become "too little"?
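The arithmetic behind the 128 MB figure can be made explicit. This is just the back-of-the-envelope division from the example above, not guidance from the draft:

```python
def memory_per_call_kib(total_ram_gib: float, peak_logins: int) -> int:
    """Maximum Argon2 memory cost (in KiB) per concurrent hashing call,
    assuming the whole RAM budget is divided evenly among peak logins."""
    total_kib = int(total_ram_gib * 1024 * 1024)
    return total_kib // peak_logins

m = memory_per_call_kib(128, 1024)   # the example from the text
print(m // 1024, "MiB per call")     # → 128 MiB per call
```

In reality one would reserve only part of the RAM for password hashing, which makes the "what if the budget were 10x smaller" question even more pressing.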
-
Figure out the maximum amount x of time (in seconds) that each call can afford.
This seems straightforward: higher values mean more cracking resistance at the cost of slower logins.
-
Select the salt length. 128 bits is sufficient for all applications, but can be reduced to 64 bits in the case of space constraints.
This is very helpful, as it provides a default value that is considered sane.
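For completeness, a 128-bit salt can be produced with any cryptographically secure random generator; in Python, for example:

```python
import secrets

# 128-bit (16-byte) random salt, the draft's recommended default length.
salt = secrets.token_bytes(16)
assert len(salt) == 16
```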
-
Select the tag length. 128 bits is sufficient for most applications, including key derivation. If longer keys are needed, select longer tags.
Just like step 5, this is helpful as it provides a sane default.
-
If side-channel attacks are a viable threat, or if you're uncertain, enable the memory wiping option in the library call.
Helpful information, as it explains the threat it tries to mitigate.
-
Run the scheme of type y, memory m and h lanes and threads, using different number of passes t. Figure out the maximum t, such that the running time does not exceed x. If it exceeds x even for t = 1, reduce m accordingly.
This step is by far the most confusing one. The idea is obviously to benchmark the system to find "good" values for m, h and t, but it does not assign any priority to those parameters. Is low memory with high parallelism and many iterations good? What about high memory and parallelism, but few iterations?
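As far as I can tell, the step describes a calibration loop roughly like the following sketch. The `fake_hash` stand-in and the halving strategy for "reduce m accordingly" are my assumptions; real code would call an actual Argon2 binding such as argon2-cffi in its place:

```python
import time

def calibrate(hash_fn, m_kib: int, x_seconds: float, min_m_kib: int = 8):
    """Find the largest pass count t whose runtime stays under x_seconds.
    If even t = 1 exceeds the budget, reduce m (here: halve it) and retry.

    hash_fn(t, m_kib) is a stand-in for a real Argon2 call."""
    while m_kib >= min_m_kib:
        t = 0
        while True:
            start = time.perf_counter()
            hash_fn(t + 1, m_kib)
            if time.perf_counter() - start > x_seconds:
                break
            t += 1
        if t >= 1:
            return t, m_kib          # largest t that fit the time budget
        m_kib //= 2                  # even t = 1 was too slow: shrink m
    return 1, min_m_kib

# Toy stand-in whose cost grows with t and m, just to exercise the loop.
def fake_hash(t, m_kib):
    time.sleep(t * m_kib / 1_000_000)

t, m = calibrate(fake_hash, m_kib=1024, x_seconds=0.01)
```

Even with such a loop written down, the open question remains: the loop only maximizes t for a *given* m and h, and says nothing about which of the three parameters should be sacrificed first.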
To summarize:
- What are the minimum values for parallelism and memory usage to be considered "secure"?
- What priority do memory usage, parallelism and iterations have?
- What happens if one parameter is too low but the other two are comparatively high, so that execution still takes about 1 second?