127

What is currently (July 2012) the recommended number of bcrypt rounds for hashing a password on an average website (one storing only name, email address and home address, but no credit card or medical information)?

In other words, what is the current capability of the bcrypt password-cracking community? Several bcrypt libraries use 12 rounds (2^12 iterations) as the default setting. Is that the recommended work factor? Would 6 rounds not be strong enough (which happens to be the limit for client-side bcrypt hashing in JavaScript; see also Challenging challenge: client-side password hashing and server-side password verification)?

I have read the answer https://security.stackexchange.com/a/3993/11197, which gives an in-depth discussion of how to balance the various factors (albeit for PBKDF2-SHA256). However, I am looking for an actual number, a rule of thumb.
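
For reference, the "rounds" value here is the cost parameter passed to the library, i.e. the base-2 logarithm of the iteration count. A minimal sketch of how it is supplied, assuming a BCrypt.Net-style `HashPassword` / `Verify` API (names differ slightly between libraries):

using System;
using BCryptNet = BCrypt.Net.BCrypt;

class Example
{
    static void Main()
    {
        // Cost 12 => 2^12 = 4,096 iterations of the Blowfish key setup.
        // The cost is embedded in the resulting hash string ("$2a$12$...").
        String hash = BCryptNet.HashPassword("correct horse battery staple", 12);

        // Verification reads the cost back out of the stored hash, so old
        // hashes keep verifying even after the default cost is raised.
        Boolean ok = BCryptNet.Verify("correct horse battery staple", hash);
        Console.WriteLine(ok);
    }
}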

Jason Smith
  • 1,551
  • 2
  • 11
  • 12
  • 1
    I do 120,000, but it depends on your application and on how much CPU power you can spend on it. E.g. if you have one user login per second and are using only 2 cores, I would not do more than 10,000. Basically you need to check how long it takes with the `time` command and see for yourself; something close to a second should be OK. – Andrew Smith Jul 14 '12 at 20:13
  • 2
    @Andrew: The speed of my own system should not determine the number of iterations. It is the current speed of the brute-forcers that should dictate how many iterations are considered safe. Hence my question: how many iterations are nowadays considered safe? – Jason Smith Jul 14 '12 at 21:43
  • 2
    @JasonSmith, The speed of your system *is* relevant, because it determines how many iterations you can reasonably do without bogging down your system. You want as many iterations as possible, because the reality is that no number of iterations is enough to be completely safe: we're just reducing the risk somewhat, not eliminating it. There is no realistic number of rounds that is large enough to be considered safe. If you ask a question here, please be prepared to listen to the answers you get. – D.W. Jul 15 '12 at 22:59
  • 1
    @D.W. wrote "please be prepared to listen to the answers you get"; sorry if I gave the impression of being pedantic or stubborn. Perhaps as a non-native English speaker my comments conveyed the wrong message. I do appreciate all answers, and try hard to understand the rationale behind them. – Jason Smith Jul 16 '12 at 12:08
  • @JasonSmith, ok, my fault for misunderstanding, sorry! – D.W. Jul 16 '12 at 17:46
  • 1
    Duplicate on StackOverflow: [Optimal bcrypt work factor](http://stackoverflow.com/q/4443476/199048) – Christian Strempfer Aug 25 '14 at 19:56
  • For anyone interested, I just wrote a small Java CLI tool to test bcrypt performance on servers (which is obviously important for balancing security, server-load and response-times): https://github.com/cdraeger/hash-performance – Blacklight Apr 04 '15 at 11:03

3 Answers

88

Short Version

Use the number of iterations that takes at least 250 ms to compute a hash.

Long Version

When BCrypt was first published, in 1999, the authors listed their implementation's default cost factors:

  • normal user: 6
  • super user: 8

A bcrypt cost of 6 means 64 rounds (2^6 = 64).

They also note:

Of course, whatever cost people choose should be reevaluated from time to time

  • At the time of deployment in 1976, Unix's crypt could hash fewer than 4 passwords per second. (250 ms per password)
  • In 1977, on a VAX-11/780, crypt could be evaluated about 3.6 times per second. (277 ms per password)

That gives you a flavor of the kind of delays that the original implementers were considering when they wrote it:

  • ~250 ms for normal users
  • ~1 second for super users.

But, of course, the longer you can stand, the better. Every BCrypt implementation I've seen uses 10 as the default cost, and my implementation used that too. I believe it is time to increase the default cost to 12.

We've decided we want to target no less than 250ms per hash.

My desktop PC is an Intel Core i7-2700K CPU @ 3.50 GHz. I originally benchmarked a BCrypt implementation on 1/23/2014:


| Cost | Iterations        |    Duration |
|------|-------------------|-------------|
|  8   |    256 iterations |     38.2 ms | <-- lowest cost benchmarked (the bcrypt spec allows a minimum of 4)
|  9   |    512 iterations |     74.8 ms |
| 10   |  1,024 iterations |    152.4 ms | <-- current default (BCRYPT_COST=10)
| 11   |  2,048 iterations |    296.6 ms |
| 12   |  4,096 iterations |    594.3 ms |
| 13   |  8,192 iterations |  1,169.5 ms |
| 14   | 16,384 iterations |  2,338.8 ms |
| 15   | 32,768 iterations |  4,656.0 ms |
| 16   | 65,536 iterations |  9,302.2 ms |

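If you want to reproduce this table on your own hardware, here is a minimal benchmark sketch (assuming a BCrypt.Net-style `HashPassword(password, workFactor)` call; any bcrypt binding with an explicit cost parameter works the same way):

using System;
using System.Diagnostics;
using BCryptNet = BCrypt.Net.BCrypt;

class BCryptBenchmark
{
    static void Main()
    {
        // Time one hash per cost; each +1 in cost roughly doubles the duration,
        // so the high end of this loop takes several seconds per hash.
        for (Int32 cost = 8; cost <= 16; cost++)
        {
            Int32 iterations = 1 << cost;

            var sw = Stopwatch.StartNew();
            BCryptNet.HashPassword("benchmark", cost);
            sw.Stop();

            Console.WriteLine("Cost {0,2} ({1,6:N0} iterations): {2,8:F1} ms",
                              cost, iterations, sw.Elapsed.TotalMilliseconds);
        }
    }
}

Run it on the machine that will actually be verifying logins, and pick the smallest cost that still clears your latency target (here, 250 ms).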

Future Proofing

Rather than having a fixed constant, it should be a fixed minimum.

Rather than having your password hash function be:

String HashPassword(String password)
{
   return BCrypt.HashPassword(password, BCRYPT_DEFAULT_COST);
}

it should be something like:

String HashPassword(String password)
{  
   /*
     Rather than using a fixed default cost, run a micro-benchmark
     to figure out how fast the CPU is.
     Use that to make sure that it takes **at least** 250ms to calculate
     the hash
   */
   Int32 costFactor = this.CalculateIdealCost();
   //Never use a cost lower than the default hard-coded cost
   if (costFactor < BCRYPT_DEFAULT_COST) 
      costFactor = BCRYPT_DEFAULT_COST;

   return BCrypt.HashPassword(password, costFactor);
}

Int32 CalculateIdealCost()
{
    //Benchmark using a cost of 5 (the second-lowest allowed)
    Int32 cost = 5;

    var sw = new Stopwatch();
    sw.Start();
    BCrypt.HashPassword("microbenchmark", cost);
    sw.Stop();
    
    Double durationMS = sw.Elapsed.TotalMilliseconds;

    //Increasing cost by 1 would double the run time.
    //Keep increasing cost until the estimated duration is over 250 ms
    while (durationMS < 250)
    {
       cost += 1;
       durationMS *= 2;
    }
     
    return cost;
}

And ideally this would be part of everyone's BCrypt library, so rather than relying on users of the library to periodically increase the cost, the cost periodically increases itself.

Ian Boyd
  • 2,125
  • 1
  • 21
  • 13
  • 2
    Great post, though I'd recommend against the automatic cost increase over time. Moore's law is more about transistor density than CPU performance, the latter of which has shown rather lacklustre growth the past few years. Also, Moore's law is close to hitting a wall, where we'll need to change the foundational technology for processors to keep growing. Nifty nonetheless! – andrewb Oct 20 '15 at 22:52
  • @andrewb You're absolutely right about using Moore's law. I've since changed it to do a microbenchmark (e.g. `BCrypt.HashPassword("benchmark", 5);`) – Ian Boyd Oct 21 '15 at 16:54
  • 2
    Uhm. Wouldn't that also *reduce* the cost if the server is under load when the microbenchmark runs? – John Morahan Oct 21 '15 at 20:28
  • 1
    @JohnMorahan No; it only *increases* the cost. See comment `//Never use a cost lower than the default hard-coded cost` – Ian Boyd Oct 22 '15 at 19:12
  • 2
    Yes, but if it's already been automatically increased from that, then returning to the hard-coded lower bound is still a decrease. – John Morahan Oct 22 '15 at 22:45
  • @JohnMorahan But I don't automatically return the hard-coded lower bound. The point is to increase the cost, not decrease it below the lower bound. It never goes below the lower bound. – Ian Boyd Oct 22 '15 at 23:59
  • 2
    Suppose your lower bound is 10, but based on the microbenchmark, HashPassword decides that, on your server right now, 13 would be more appropriate. Then you get slashdotted and everyone signs up and hammers your site at once. Now the microbenchmark runs slower and computes that say, 11 is the appropriate cost. It's not a decrease from 10, but it's a decrease from 13. – John Morahan Oct 23 '15 at 08:15
  • @JohnMorahan You are right. If you really think 13 should be the minimum, then you should periodically increase the constant (as the BCrypt spec says you should do). In the meantime, this benchmark code can only increase security. – Ian Boyd Oct 23 '15 at 17:48
  • @IanBoyd Perhaps a good way to deal with it is to log whenever an increased cost factor is determined necessary. Then if you get a significant amount of these logs, you simply update your bcrypt configuration to the new minimum – Cruncher Aug 12 '19 at 18:43
  • @Cruncher People get *sooo* uppity and pissy when apps try to use telemetry to make things better. We have to resort to doing the best we can. The reality is that every developer ever will have *no* idea what it means, what to do about it, what the risks are, or even how to go about it. *(Let's have a CAB meeting, we need to get all the stakeholders, what are the implications, what is the disaster recovery, how much downtime)* **`shoots self in head`**. So it will go unfixed for years. The code just has to do the right thing. *(aka "the pit of success")* – Ian Boyd Aug 13 '19 at 16:03
  • Honestly, I think this is really a good idea from a practical standpoint and perhaps best effort in future-proofing. With respect to mitigating concurrent load (i.e. skewed results), you might consider just [revising the application's init/runtime configuration](https://docs.microsoft.com/en-us/dotnet/api/system.configuration.configuration.save?view=dotnet-plat-ext-3.1) or maintain in a nearby cache file (but keep an inline, hard-coded minimum anyways in case something happens to that file) with any new/higher work factor values discovered, discarding previously attained benchmarks. – Matt Borja May 21 '20 at 14:34
  • @MattBorja Those are all good ideas, but they depend on the specific environment. The same library class has to support running on a desktop PC as a standard user, as well as running as a least-privilege service, as well as on something like Azure with no filesystem, ini-file, `.config`, or registry. Certainly the Bcrypt library can expose a public `BCrypt.GetIdealCost()`, and let the developer decide when and how often to update it. But the problem still happens that developers shouldn't **have** to deal with this; because we all know that they simply won't. – Ian Boyd May 21 '20 at 19:17
  • @IanBoyd Levels of abstraction dictate what developers should/shouldn't have to deal with, my friend. We like frameworks for the same reason we like automatic transmissions: they do much of the thinking for us. But then we send people off to training (i.e. college, trade, etc.) because we *need* people to also think for themselves. For today, we're talking principally about maintaining work factors, so other viable, non-file based solutions may also include in-memory caching (whether that's statics, Application variables, Session, etc.). Surely that would alleviate some platform dependency. – Matt Borja May 24 '20 at 07:30
62

I think the answer to all of your questions is already contained in Thomas Pornin's answer. You linked to it, so you presumably know about it, but I suggest that you read it again.

The basic principles are: don't choose a number of rounds; instead, choose the amount of time password verification will take on your server, then calculate the number of rounds based upon that. You want verification to take as long as you can stand.

For some examples of concrete numbers, see Thomas Pornin's answer. He suggests a reasonable goal would be for password verification/hashing to take 241 milliseconds per password. (Note: Thomas initially wrote "8 milliseconds", which is wrong -- this is the figure for a patience of one day instead of one month.) That still lets your server verify 4 passwords per second (more if you can do it in parallel). Thomas estimates that, if this is your goal, about 20,000 rounds is in the right ballpark.

However, the optimal number of rounds will change with your processor. Ideally, you would benchmark how long hashing takes on your processor and choose the number accordingly. This doesn't take long, so for best results, whip up a quick script and work out how many rounds are needed to ensure that password hashing takes about 240 milliseconds on your server (or longer, if you can bear it).
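
For completeness, here is a sketch of that calibration under the same assumptions as Thomas Pornin's numbers (PBKDF2-SHA256, using .NET's built-in `Rfc2898DeriveBytes`); the same measure-and-scale idea applies to a bcrypt cost factor, except that there you would increment the cost rather than double a round count:

using System;
using System.Diagnostics;
using System.Security.Cryptography;

class Pbkdf2Calibration
{
    // Find a PBKDF2-SHA256 iteration count that takes at least targetMs on this machine.
    static Int32 CalibrateIterations(Double targetMs)
    {
        var salt = new Byte[16];
        RandomNumberGenerator.Create().GetBytes(salt);

        Int32 iterations = 20000;   // Thomas Pornin's 2011 ballpark, used only as a starting point
        while (true)
        {
            var sw = Stopwatch.StartNew();
            using (var kdf = new Rfc2898DeriveBytes("benchmark", salt, iterations,
                                                    HashAlgorithmName.SHA256))
            {
                kdf.GetBytes(32);
            }
            sw.Stop();

            if (sw.Elapsed.TotalMilliseconds >= targetMs)
                return iterations;

            iterations *= 2;   // double and re-measure until the target duration is reached
        }
    }

    static void Main()
    {
        Console.WriteLine("Use at least {0:N0} iterations on this machine.",
                          CalibrateIterations(240));
    }
}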

Wolfgang
  • 253
  • 1
  • 4
D.W.
  • 98,420
  • 30
  • 267
  • 572
  • 7
    Time doesn't matter, money is why the world spins. – rook Jul 15 '12 at 23:39
  • I have a hard time understanding the reasoning that *my* platform determines the amount of hashing rounds. A client-side JavaScript bcrypt implementation can do about 2^6 rounds on a legacy mobile device, while my most recent hardware can do 2^13 rounds. However, you commented elsewhere that "12 rounds is almost certainly not enough". How can the speed of *my* implementation be relevant? Perhaps this just means I need to buy faster hardware and can't run a secure website on an old Pentium 4 which does 2^4 rounds? – Jason Smith Jul 16 '12 at 09:00
  • If 12 rounds is the best you can do, go with that. The summary version is that you want to use as many rounds as you can tolerate (for any reasonable number of rounds, even more will be even better). Therefore, the speed of your implementation is relevant, because it places an upper bound on the number of rounds you can use (if you set the number of rounds ridiculously large, it'll take too long). I recommend setting the number of rounds as close to that upper bound as possible. That's why the speed of your implementation is relevant. – D.W. Jul 16 '12 at 17:47
  • 3
    P.S. I do not recommend implementing bcrypt in Javascript, as performance will likely be very poor. I assume you are computing bcrypt on the server. (I do not think there is sufficient value to computing bcrypt on the client, so I do not recommend computing bcrypt on the client.) I suggest using a native, optimized implementation of bcrypt; that will run much faster. – D.W. Jul 16 '12 at 17:49
  • 1
    There should be a number somewhere. How many rounds our system can tolerate is not relevant; what is relevant is how difficult it is for the NSA to crack your password if it has been through only 10 rounds of bcrypt (the default in many systems). And I'm saying NSA because I don't think there are other groups that can provide the same amount of computation power. – David 天宇 Wong Feb 07 '15 at 10:36
  • 4
    @David天宇Wong, the number of rounds our system can bear *is* relevant. There is no practical number of rounds that is enough to make it impossible for the NSA to crack any passwords (even weak ones). So you can never get 100% security -- it's always a tradeoff between how much harder you are making it for yourself, vs how much harder you are making it for the adversary. But given the realities of how users choose passwords, we can never make it as hard as we would really like for the adversary. This is risk mitigation, not risk elimination. – D.W. Feb 07 '15 at 20:52
  • @D.W., I don't get it. Each round will increase the complexity of the brute force; at some point it will be infeasible for any machine or group of machines on earth to crack it, even the NSA's. – David 天宇 Wong Feb 08 '15 at 01:17
  • 11
    @David天宇Wong, a non-trivial number of people choose a weak password (say, 10 bits of entropy). We'd need, say, 2^70 iterations to make that secure against the NSA. If you did 2^70 iterations of bcrypt, no one could use it in practice, because it would be far too slow for the good guys. That's why I say there is no value for the number of iterations that both provides strong security against strong adversaries (like the NSA), and yet is also small enough to be practical for the legitimate users of the system. Anyway, we've gotten a bit far afield from the question that was asked. – D.W. Feb 08 '15 at 01:21
  • I was not thinking of this use case; of course you will never be able to protect weak passwords, but you should think about the number of iterations needed to protect normal passwords. What do you mean by 10 bits of entropy for a password? I guess the entropy of the password is 2^10, but what does it translate to? – David 天宇 Wong Feb 08 '15 at 01:39
  • 4
    @David天宇Wong, a password with 10 bits of entropy *is* normal. But if you don't like that, consider a password with 20 bits of entropy. (More than half of all users have a password with fewer than 20 bits of entropy.) Then you'll need 2^60 iterations of bcrypt to provide strong security for those passwords, but that's far too many for the good guys to do. Try working through this on your own. There are lots of resources about what entropy is -- this is not the place to explain/ask about it. Entropy is a fundamental concept that you have to understand, to understand password security. – D.W. Feb 08 '15 at 01:51
5

Stronger Key Derivation via Sequential Memory-Hard Functions is a very good paper on the topic of key stretching. On page 14 it compares various hashing algorithms by how much money it would cost to break the hash, which is a useful way of thinking about these things. (On a side note, ChromeOS uses Scrypt if a TPM isn't available.)

The idea is that you want these password hashes to remain unbroken for as long as possible. With Moore's law this is an exponentially fast-moving target. Scrypt uses a variable amount of memory and CPU, and these parameters can be made heavier as a function of time, so that each time the client logs in you can update the password hash to be more secure. In the case of PBKDF2 this could look like rounds = 2^(current_year - 2000) or something like that.
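
Here is a sketch of that upgrade-on-login idea applied to bcrypt, assuming a BCrypt.Net-style `Verify` / `HashPassword` API and a hypothetical `saveNewHash` callback supplied by the caller. The stored hash already records the cost it was created with in its `$2a$NN$` prefix, so an outdated hash can be detected and replaced the next time the user presents the correct password:

using System;
using BCryptNet = BCrypt.Net.BCrypt;

static class PasswordUpgrade
{
    // Verify a login attempt and, if the stored hash was created with a lower
    // cost than we want today, re-hash the now-known-good password at the
    // current cost and persist it via the supplied callback.
    public static Boolean VerifyAndUpgrade(String password, String storedHash,
                                           Int32 currentCost, Action<String> saveNewHash)
    {
        if (!BCryptNet.Verify(password, storedHash))
            return false;

        // bcrypt hashes look like "$2a$10$<22-char salt><31-char digest>";
        // the two digits after the second '$' are the cost factor.
        Int32 storedCost = Int32.Parse(storedHash.Split('$')[2]);

        if (storedCost < currentCost)
            saveNewHash(BCryptNet.HashPassword(password, currentCost));

        return true;
    }
}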

It's important to note that you can't just offload this processing onto the client and expect your protocol to be secure. All client-side hashing authentication protocols I know of require the server to make an identical calculation in order to verify the authentication credentials (NTLM, NTLMv2, SRP, WPA-PSK...).

Tobias Kienzler
  • 7,578
  • 10
  • 43
  • 66
rook
  • 46,916
  • 10
  • 92
  • 181
  • 1
    I am aware I should match Moore's law. But that is exactly my question: how many iterations are nowadays considered safe given the current speed of the brute-forcers? – Jason Smith Jul 14 '12 at 21:47
  • 1
    @Jason Smith I don't think anyone can give you a real number because it is circumstantial. My answer was 2^12, which is 4096, and my rationale is that it's 2012. – rook Jul 14 '12 at 22:02
  • 1
    Ah, I thought 2^(current_year-2000) was just an arbitrary example. Ok, so 12 rounds in 2012. – Jason Smith Jul 15 '12 at 10:44
  • 1
    No, 12 rounds is almost certainly not enough. – D.W. Jul 15 '12 at 22:59
  • 2
    @D.W. 4096 iterations is larger than most implementations I have seen; keep in mind next year it will be 8192... – rook Jul 15 '12 at 23:42
  • 4
    @Rook, I'm with you! I was merely pointing out that Jason Smith seemed to be confused about the difference between 12 rounds vs 2^12 rounds. – D.W. Feb 07 '15 at 20:50