Ok, so the site generates a random password for each user at registration time. An important question is whether a user can manually set their password later, or whether they are forced to use a random site-generated password. Let's look at the two cases separately.
Random passwords
As far as I can tell, this is the scenario you are describing in the question. Unfortunately, your dev is (mostly) right. At least about a single iteration of hashing vs a big slow hash. Your question kinda has the flavour of blindly applying "best practices" without considering what those practices were intended for. For a brilliant example of this, here's a good read:
The Guy Who Invented Those Annoying Password Rules Now Regrets Wasting Your Time
Suggestion
Do switch from MD5 to SHA256, probably add a per-user salt, and maybe consider going to 32-char passwords. But adding a big slow hashing function will increase your server load for little to no added security (at least barring any other goofs in your implementation).
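For concreteness, here's a minimal sketch of what that suggestion could look like, assuming Python and its standard secrets and hashlib modules (the function names are illustrative, not from your codebase):

```python
import hashlib
import secrets

def generate_password(n_hex_chars: int = 32) -> str:
    """Server-generated random password; 32 hex chars = 128 bits of entropy."""
    return secrets.token_hex(n_hex_chars // 2)  # token_hex(n) -> 2n hex chars

def hash_for_storage(password: str) -> tuple[str, str]:
    """Single SHA-256 over salt || password; store both salt and digest."""
    salt = secrets.token_hex(16)  # 128-bit per-user salt
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest
```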
Understanding hashing as a brute-force mitigation
The amount of work a brute-force attacker who has stolen your database needs to do to crack password hashes is roughly:
entropy_of_password * number_of_hash_iterations * slowness_of_hash_function
where entropy_of_password is the number of possibilities, or "guessability", of the password. So long as this "formula" is higher than 128 bits of entropy (or an equivalent work factor / number of hash instructions to execute), then you're good. For user-chosen passwords, the entropy_of_password is abysmally low, so you need lots of iterations (like 100,000) of a very slow hash function (like PBKDF2 or scrypt) to get the work factor up.
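To make "lots of iterations of a slow hash" concrete, here's a sketch using PBKDF2 from Python's standard library (the inputs are illustrative; tune the iteration count to your hardware):

```python
import hashlib
import os

# For low-entropy, user-chosen passwords: many iterations of a slow KDF.
password = b"correct horse battery staple"
salt = os.urandom(16)
# 100,000 iterations of HMAC-SHA256, as in the ballpark figure above.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
```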
By "20 digits hex digits" I assume you mean that there are 1620 = 280 possible passwords, which is lower than "best-practice" 2128, but unless you're a government or a bank, you probably have enough brute-force security from the entropy of the password alone.
Salts also serve no purpose here because pre-computing all the hashes would take like 2^80 * 32 bits/hash, which is roughly 4.8 yottabytes (or 5000x the capacity of all hard drives on the planet combined). Rainbow tables help this a bit, but quite frankly, any attacker capable of doing that deserves to pwn all of us.
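You can sanity-check both numbers yourself; this is plain arithmetic, nothing library-specific:

```python
import math

# Entropy of a 20-hex-digit random password: 16^20 possibilities.
print(math.log2(16 ** 20))        # 80.0 bits

# Storage to precompute every hash, keeping 32 bits per entry:
print((2 ** 80) * 32 / 8 / 1e24)  # ~4.8 yottabytes
```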
You still want to hash the password to prevent the attacker from walking away with the plaintexts for free, but one hash iteration is sufficient. Do switch from MD5 to SHA256 though, and maybe consider going to 32-char passwords.
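Verification is then just recomputing that single hash; a sketch, assuming the salt/digest layout from the earlier snippet:

```python
import hashlib
import hmac

def verify_password(candidate: str, salt: str, stored_digest: str) -> bool:
    """Recompute the single salted SHA-256 and compare in constant time."""
    digest = hashlib.sha256((salt + candidate).encode()).hexdigest()
    return hmac.compare_digest(digest, stored_digest)
```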
Human brain passwords
Commenters on this thread seem obsessed with the idea that, despite your statement that the site generates passwords, users can in fact choose their own passwords.
As soon as the user has the possibility to change the password, a single hash iteration is no longer an option for storing the now low-entropy password. In this case you are correct that you need to do all the best-practice things for password storage.
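In code, that means swapping the single SHA-256 for a deliberately slow, memory-hard KDF. A sketch using stdlib scrypt (available in Python 3.6+ when built against OpenSSL 1.1+; parameters are illustrative, not a recommendation tailored to your servers):

```python
import hashlib
import os

password = b"hunter2"  # user-chosen, so assume low entropy
salt = os.urandom(16)
# n, r, p below are common interactive-login parameters; pick values
# that hit your latency budget on your actual hardware.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```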
Salting
Either way (user-chosen or random passwords) you probably want a per-user salt.
If user-chosen, then salts are part of the best practices. 'nuff said.
If random, @GordonDavisson points out a really nice attack in comments [1], [2] based on the observation that a db lookup is essentially free compared to a hash computation. Computing a hash and comparing it against all users' hashes is essentially the same cost as comparing it against a specific user's hash. So long as you're happy getting into any account (rather than trying to crack a specific account), then the more users in the system, the more efficient the attack.
For instance, say you steal the unsalted hashed password db of a system with a million accounts (about 2^20). With 2^20 accounts, you statistically expect to get a hit within the first 2^60 guesses. You're still doing O(2^80) work in comparisons, but O(2^60) hashes * O(2^20) db lookups ~= the cost of O(2^60) hashes.
Per-user salts are the only way to prevent attacking all users for the cost of attacking one user.
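Here's a toy illustration of the attack and the fix, with tiny numbers and standard hashlib (purely to show the cost asymmetry, not an attack tool):

```python
import hashlib
import secrets

# Toy database (a real target would have ~2**20 users and 2**80 passwords).
passwords = [secrets.token_hex(10) for _ in range(1000)]

# Unsalted: one set holds every user's hash, so each candidate guess costs
# ONE hash plus a near-free set lookup that covers ALL accounts at once.
unsalted_db = {hashlib.sha256(p.encode()).hexdigest() for p in passwords}

def try_unsalted(candidate: str) -> bool:
    return hashlib.sha256(candidate.encode()).hexdigest() in unsalted_db

# Per-user salts: the same candidate must be re-hashed once per account,
# so attacking N users costs N times the work of attacking one user.
salted_db = [
    (salt, hashlib.sha256((salt + p).encode()).hexdigest())
    for p in passwords
    for salt in [secrets.token_hex(16)]
]

def try_salted(candidate: str) -> bool:
    return any(
        hashlib.sha256((salt + candidate).encode()).hexdigest() == digest
        for salt, digest in salted_db
    )
```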