60

Most sites & software seem to have a default of auto lock or time lock after 3 wrong tries.

I feel that the number could be much higher. Not allowing retries is mainly to prevent automated brute force attacks, I think. The likelihood of a brute force attack getting the password right in 4 retries is almost the same as getting it in 3 retries - i.e. very, very small. I think the limit can be much higher without compromising security.

I know there are other strategies like increasing time to retry after each retry - but I am asking about a simple strategy like locking after "n" attempts - what's a good maximum "n"?

user93353
  • 1,982
  • 3
  • 19
  • 33
  • 1
    This really is up to the user experience you want to provide. Do your users expect high security of their accounts or easy logins. We can't calculate this for you and even on the security side, it depends on a lot of factors. For instance, what's your justification for not locking out after 1 failed attempt? Once you clearly understand that for yourself, then the justification for a higher number becomes clearer. – schroeder Oct 09 '16 at 13:31
  • 15
    I wouldn't limit tries per account for a typical internet application at all, the risk of an attacker locking out the real user is too severe. I'd go with account based CAPTCHAs and IP based locking. Though one-time-tokens sent via email might be a work-around to allow the legitimate user to login even during an attack. – CodesInChaos Oct 09 '16 at 13:31
  • 1
    @CodesInChaos: Failing to limit login attempts is a massive security risk! Instead you should mitigate the support overhead by adding a 2nd factor auth and/or increasing delays between logins. CAPTCHAs can be used as a 2nd factor but they are a horrible user experience for general use and likely to be a real turn-off for customers. – Julian Knight Oct 09 '16 at 13:37
  • @JulianKnight Having to answer CAPTCHAs while your account is under attack is certainly less annoying than not being able to log in at all. Per account delays behave like a per-account lockout, preventing the legitimate user from logging in. 2nd factor is nice, but few websites have the leverage to force the user to use it. – CodesInChaos Oct 09 '16 at 13:41
  • @CodesInChaos: you seem focussed on one issue out of several. DOS due to someone else bombing your login is only one attack scenario and, I would argue, not the most common. Simple attempts to break in are a **far** more common threat. Failure to restrict failed login attempts is one of the first things you will be pulled up on in a security audit (e.g. a Pen test or IT Health Check). It is **very** dangerous. – Julian Knight Oct 09 '16 at 13:48
  • 9
    The PCI requirement is 5 – Neil McGuigan Oct 09 '16 at 20:23
  • just as a side note: even three attempts can be a lot, if the attacker can try the passwords on *every* account. Say 1% of getting the password correct in the first three attempts and trying 1000 accounts -> 10 successful logins (and 990 people locked out of their accounts). Limiting the attempts per IP might therefore be wise (regardless of other mechanisms) – Lukas Oct 09 '16 at 20:59
  • 1
    As others have noted there are two ways of guessing passwords: trying all the passwords for a single user or trying the same passwords for all users. By having 4 tries before the block/delay you allow 1 more guess *per user*, which means millions of extra tries in a big application. – Bakuriu Oct 10 '16 at 07:09
  • Personally I reckon that somewhere up to 10 works for most Alphanumeric (so not a 4 digit PIN) setups. Realistically if a user can't remember their password after 10 attempts, they've forgotten it, and if an attacker can guess the password under that the user has a really bad password... – Rory McCune Oct 10 '16 at 08:32
  • This depends on your password requirements. The more complex and unique the requirements, the less likely the user will remember your password. You could make your user experience bad if people can't even get into your service and make it prone to social engineering as mentioned in the top answer. – HopefullyHelpful Oct 10 '16 at 15:04
  • 3
    "Most sites & software seem to have a default of auto lock or time lock after 3 wrong tries." Citation needed. I *seriously* doubt most major sites lock your account this quickly. I know for a fact Google does not (or at least didn't a few months ago). – jpmc26 Oct 10 '16 at 15:33
  • 1
    Once upon a time, I had about a dozen devices connected to my corporate Exchange account, and every time I changed my password (required every 45 days, which is a whole other discussion of why that's terrible), all of my devices would try to reauthenticate within a minute's span and end up locking my account. Eventually I had to build a procedure of disabling the devices, changing my password, then powering them all on one by one as I fixed the password on each one. But slowly enough to not trip the auto-lockout. IMO, automatically locking accounts based purely on attempt count is bad UX. – fluffy Oct 10 '16 at 23:55
  • For a single username or a single IP source, infinity tries, spaced by say 3 seconds is far more than needed to get rid of automated attacks. The "3tries" lockout was and is a stupid rule someone created without any thought. – Carl Witthoft Oct 11 '16 at 19:53
  • @jpmc26 This is certainly true for most banks, although many of them have a lot of other dubious password restrictions due to legacy code: things like limiting the character set and password length are not at all uncommon. – jpaugh Oct 11 '16 at 21:41
  • @jpaugh - yes, all banks where I have accounts lock after 3,4 tries. – user93353 Oct 12 '16 at 02:53

6 Answers

91

Unless you have separate means of restricting access to the login form itself, a good baseline is: don't have a hard limit, because it's far too easy for someone to be completely locked out of their account.

This is bad because of the denial of service, obviously, but it's also a security concern in itself: it increases support requests from people asking for their accounts to be unlocked, the people doing the unlocking become habituated, and social engineering attacks starting with "hey, my account is locked" become that much easier.

Instead, extend timeouts — but not infinitely; just enough to restrict the number of guesses to a reasonable amount over time given your password complexity requirements.
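
As a rough illustration of "extend timeouts, but not infinitely", here is a hypothetical sketch. All names and thresholds (`BASE_DELAY`, `MAX_DELAY`) are my own illustrative choices, not a definitive implementation:

```python
# Hypothetical sketch: per-account failure tracking with a growing,
# capped delay instead of a hard lockout.
FAILURES = {}        # account -> consecutive failed-attempt count
BASE_DELAY = 1       # seconds to wait after the first failure
MAX_DELAY = 15 * 60  # capped, so the delay never becomes an effective lockout

def required_delay(account):
    """Seconds the caller must wait before the next attempt is accepted."""
    fails = FAILURES.get(account, 0)
    if fails == 0:
        return 0
    # Exponential growth, hard-capped so a legitimate user can always
    # eventually get back in.
    return min(BASE_DELAY * 2 ** (fails - 1), MAX_DELAY)

def record_attempt(account, success):
    if success:
        FAILURES.pop(account, None)  # reset the counter on a good login
    else:
        FAILURES[account] = FAILURES.get(account, 0) + 1
```

The cap is the point: exponential growth without one effectively becomes a permanent lockout after a handful of failures.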

mattdm
  • 2,731
  • 1
  • 15
  • 17
  • Agreed. A good idea as well is to have dynamic password policies *AND* dynamic restrictions depending on the account. So for example a user account could have a system that would incur a wait time of like 15 minutes over 90 retries (90th retry = 15 minutes wait, then every retry over that will incur another 15 minutes), but also allow the locktime to be bypassed by a captcha. But an admin or high-privilege account could have like a 24-hour lock after the 10th retry, and no captcha bypass (rather, captcha PLUS wait time is required). – sebastian nielsen Oct 10 '16 at 00:07
  • For accounts incorporating some sort of stored value, licenses, products or anything of monetary value, a good idea is to calculate the monetary worth of the account and then apply limitations based on this. So a low-value account of like 5€ has very high retry limits, while a high-value one with like 500€ will locktime very soon and with pretty long limits. But design the system well so an attacker can't use it to poll the value of the accounts. This can be done with some fuzz value that randomizes the timelock a bit, so an attacker must use a great deal of retries to find out the value. – sebastian nielsen Oct 10 '16 at 00:10
  • This doesn't answer the question (the OP said: auto lock *or* time lock). – kubanczyk Oct 10 '16 at 09:57
  • 1
    "Instead, extend timeouts — but not infinitely" -- implementing the timeout as a token bucket will make it look a bit like a finite limit with a lockout, except that you don't really get locked out, you just have to wait for the token interval. I suspect (but have no proof) that the number 3 comes about because that's one attempt to type the password wrong (typo or mis-remembered), followed by one repeat attempt, followed by one very careful attempt. This "should" be fast. If you still can't get it right after that, chances are something is seriously wrong (e.g. you don't know it). – Steve Jessop Oct 10 '16 at 15:00
  • @sebastiannielsen If a user's password is in the 200 most common passwords, I get 2€. If it's in the 100 most common, I get 5€. If it's in the 50 most common, I get 8€. If it's in the 10 most common, I get 50€... If I were a cracker (criminal hacker), I'd know whether I'd run a bot through a load of accounts for a frequent low-payout (it's practically a low-risk lottery that you can actually get *more money* out of than your costs!). – wizzwizz4 Oct 10 '16 at 19:29
  • @sebastiannielsen Which means I can wreak havoc by having my bot do ten wrong guesses for all accounts I think might be admin. (or just all accounts, since the cost is cheap.) I won't get access, but neither will legit admins. :\ – Kathy Oct 10 '16 at 20:17
  • "Instead, extend timeouts — but not infinitely" - I am technically locked of my corporate computer because they implemented quadratic timeouts and I cannot keep trying variation the vague idea of what I put. – durum Oct 12 '16 at 09:57
  • 2
    @durum Yeah, "quadratic" becomes "infinite" for all practical purposes _way too fast_. – mattdm Oct 12 '16 at 13:56
20

In my opinion, this can't be answered in general. It depends on your requirements on usability, security and other parameters.

One of the main factors is how big the range of possible passwords is in your specific scenario. Assuming an attacker has no further information about the password and must brute force the right combination, each guess has a 1-in-N chance of being right, where N is the number of possible password combinations.

For example with a 4 digit PIN number there are 10,000 possible combinations (10^4). The chance of guessing a specific combination with one attempt is 1 to 10,000 or 0.01%. Allowing two attempts doubles the chance of guessing it right to 0.02%.
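
A quick sanity check of these numbers (using the product-of-failures calculation from the comments below, which assumes the attacker never repeats a guess):

```python
# Probability of guessing a uniformly random password within n distinct
# attempts, over a space of `space` equally likely combinations.
def p_success(n, space=10_000):
    p_fail = 1.0
    for i in range(n):
        # chance that attempt i+1 also fails, given the previous ones failed
        p_fail *= (space - 1 - i) / (space - i)
    return 1 - p_fail
```

For a 4-digit PIN, `p_success(1)` is 0.0001 (0.01 %) and `p_success(2)` is 0.0002 (0.02 %), matching the figures above; with distinct guesses the result is simply n / N.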

It's up to you to find the right tradeoff for your specific scenario. But keep in mind that brute forcing is not the only attack method you may have to consider. Some attackers may have additional information about the target and can therefore improve their chance of guessing right by trying more likely combinations (e.g. if the targeted person uses personal information in their password and the attacker knows about it).

S.L. Barth
  • 5,486
  • 8
  • 38
  • 47
jojoob
  • 466
  • 3
  • 5
  • 1
    Actually it's _exactly_ 0.0002 (or 0.02%). If the attacker uniformly randomly chose their second guess, and thus had some chance of repeating the first guess, their chance of guessing right in two tries would be 0.00019999, but in reality, since they can rule out the first combination guessed, their chance is a little bit higher than that. Specifically, 0.00000001 higher, which brings it up to 0.0002 exactly. I think that's the increase you were thinking of. – David Z Oct 09 '16 at 18:00
  • Hm, I thought I have a chance of 0.0001 guessing right with the first try. And for the second try I have a chance of 1 to 9,999 = 0.00010001... and in sum it is 0.020001. Isn't this right? I'm not that trained in probability calculation. – jojoob Oct 09 '16 at 19:39
  • 4
    @StackTracer He's not. With 4 possible combinations, your chance of guessing right is 25% on the first try, and 50% on the second try (since you've tried half of the possibilities). The chance of guessing the password **exactly** on the second try **after** a first fail is 33%, but it doesn't matter in the whole picture. – Blackhole Oct 09 '16 at 20:56
  • @jojoob According to your maths, they have a 100.01% chance of correctly guessing the password if they try 6322 passwords. And a 978.75% chance if they try all 10000. – user253751 Oct 09 '16 at 21:06
  • 8
    @jojoob, 9999/10000 chance to fail on the first try, 9998/9999 to fail on the second try. Multiply them together to get a total of 9998/10000 chance to fail after two tries, or 2/10000 = 0.0002 = 0.02 % to succeed after two tries. – ilkkachu Oct 09 '16 at 21:28
  • @ilkkachu thank you for the boost. I will edit the answer... – jojoob Oct 09 '16 at 21:49
  • @ilkkachu do you mean a 0.02% chance of success in 2 or fewer tries? Assume the chance of failure is either 0 or 1 once a single success has been achieved (0 if you remember to use the successful pin, 1 if you keep trying every other pin). – Gary Oct 11 '16 at 18:16
6

The number that is considered "safe" is fairly arbitrary because the risk is based on the value of the data, the level of allowed and enforced complexity, min/max password length and possibly other security measures you may have implemented.

The typical number is anywhere between 3 and 10. If you implement increasing timespans between unsuccessful attempts, you can go towards the higher end but only if you allow/encourage or enforce relatively high password length & complexity.

What you need to remember is that most people don't come up with random passwords. In the UK, for example, the most common PIN is 1966 and another common one is 1066 - both famous dates from history. There's more to choose from in a word, of course, but people still often end up with common words. So allowing 4 guesses on a short password is more effective for an attacker than you might think, especially if your system allows further attempts after a timeout.
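
One cheap mitigation for this is a deny-list check at password-set time. The tiny set below is purely illustrative, standing in for a real published list (e.g. a top-10k leaked-password list):

```python
# Illustrative only: reject the most common choices outright.
COMMON_PINS = {"1234", "0000", "1111", "1966", "1066"}

def is_too_common(pin):
    """True if the chosen PIN/password is on the deny-list."""
    return pin in COMMON_PINS
```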

Of course, on many sites this might not matter that much if the risk and data sensitivity is low.

In addition to extending timeouts, another good way to improve security is to require a second factor of authentication after a few failed attempts, often by sending a code to email or phone.

You could also prevent login attempts from different networks in short succession and especially from different localities. It is also a good idea to log and audit failed login attempts.

These strategies help avoid high levels of customer support calls whilst minimising risk.

Julian Knight
  • 7,092
  • 17
  • 23
  • 1
    My bank (isbank) doesn't allow passwords that start with 19. – ave Oct 11 '16 at 07:50
  • 1
    "a good idea to log and audit failed login" - Not sure if you were intending to log failed _passwords_ as well? Word of warning... if failed passwords were logged and the log was hacked then the hacker would potentially gain access to a lot of password "clues" from legitimate users that had simply misstyped a char or two in their password. – MrWhite Oct 11 '16 at 09:21
  • 2
    No, generally you should not record even failed passwords as this can have unintended side-effects such as showing up *nearly* correct passwords. Not good. That can be a useful exercise but only if done very carefully and the you need to very carefully secure and handle the output, putting it into general logs is not a good idea. – Julian Knight Oct 11 '16 at 11:57
6

Problem

Initial note: I'm biased towards web servers, yet much of what is said here will apply to other kinds of services.

The problem is denial of service. It can happen in two ways: 1) an attacker runs a brute-force attack in such a way that it ends up saturating the server, and now nobody can access the service; 2) users (malicious or not) try too many times, leading to access being locked.

Other considerations

  1. Telling the user whether the error is in the user name or the password allows an attacker to brute-force/dictionary user names. Although this goes against usability, you should err on the side of security if the accounts are meant to be private or anonymous by default※.

  2. Telling the user that the account is locked makes it easier for attackers to cause problems by locking many accounts (which is a form of denial of service, and will probably lead to many support tickets). Also, accounts that don't exist can't be locked, which lets attackers discover valid accounts by this method. You may consider mimicking the lock behavior on nonexistent accounts.

  3. Discovering accounts is not only half the brute-force/dictionary attack, but it can also be useful in future social engineering.

  4. A variant of brute-force/dictionary attack is to try the same password (usually a statistically common one) against a large number of user names.

※: Will search engines be able to index user names? If they will, opt for usability (attackers have a list of valid user names in the search engine cache anyway). Prevent search engines from accessing such information on sites where knowing who has an account can be considered sensitive information.

Possible solutions

There are a few common things to try to solve the problem:

  • Add a CAPTCHA
  • Add a retry time
  • Lock the account
  • Lock the origin
  • Two-factor authentication

It should be noted that only blocking the IP at the firewall level or web server configuration level will have a real impact on server load. Yet, if you only lock the origin when it is paired with a given account, the logic must live in server-side code. The same is true of the remaining solutions: they all require server-side code.

Because these solutions rely on server-side code, they will not really protect the server from a flood attack. This means their main application is as a deterrent.


Vocabulary:

For the context of this post, these words have the meaning mentioned here:

  • "Lock": "prevent access until further authentication is provided", to provide further authentication means to follow similar - if not the same - steps as those provided to users who forgot their password.

  • "Origin": the IP, user agent, or other techniques the server may use to identify the source of a connection. If used, it should be mentioned in the privacy policy that the server will log such information.

  • "Third channel": Email, SMS, dedicated app, or other medium of private communication outside of the control of the server.

It should also be noted that under this definition the retry time is not a lock because it doesn't require additional authentication from the user but waiting instead.

And, because it can't be said enough times, hash and salt your passwords.

CAPTCHA

It should be noted that not all CAPTCHA solutions are visual. Some are auditory, and even others are textual (for example: "How many colors in the list purple, penguin, blue, white and red?").

Pros

CAPTCHAs are easy to implement using third-party solutions. Using a third-party solution also externalizes the problem of making the CAPTCHA strong enough.

Cons

Using a CAPTCHA may become an inconvenience for legitimate users who are having trouble typing their password. Current reCAPTCHA mitigates this problem by using behavior analytics to identify human users.

A robot may solve the CAPTCHA through clever AI, or simply by passing it to the (human) attacker to solve.

Retry time

Pros

Retry time has one advantage: it buys time. So, it can be combined with a notification on a third channel to alert the owner of the account.

What action can the user take? You can suggest using a stronger password, but that won't really solve the problem.

As an alternative, consider giving the user the option to deny access from the attacker machine (that is to lock the combination of origin and account)※. See "Lock the origin".

※: It should require authentication, and only affect the current account. Care should be put in avoiding any defect that may lead to an account locking another account.

Cons

Using a retry time reduces the usability of the service, as it becomes an inconvenience for legitimate users who are having trouble typing their password. This is worse than a CAPTCHA, as it is cognitive downtime.

Brute-force/dictionary attacks are still viable if the attacker performs an attempt once each hour or so. Alternatives to deal with this problem include security policies to change the password frequently (which the user may render ineffective by choosing similar passwords) and IDS or other analytics to detect attackers (which could be circumvented by distributing the attack from multiple sources - hopefully that is expensive enough to be a deterrent itself).

Lock the account

Pros

It is resilient against spreading an attack over time or multiple origins.

Cons

Locking the account may lead to a legitimate user being locked out of the account because of the number of failed attempts.

Also, failed attempts by an attacker in a third location will lock out the legitimate user. Combining origin lock with account lock would allow more granular control. In this case, the account would be locked only for the origin from where access is being attempted.

Attacks may still affect the system by causing locked out legitimate users to contact support or to find an alternative service.

Lock the origin

Pros

Locking an origin, independently of the account, has the advantage of stopping attackers instead of punishing accounts.

Cons

It would require the server to track the origin of requests and distinguish failed from successful attempts.

The origin of an attack may be shared between many users (For example in Internet cafés), and locking an origin may mean to lock out legitimate users.

Combining origin lock with account lock would allow more granular control. In this case, at first the origin would be locked only for the account it is trying to access, yet an origin that is locked for many accounts can be locked globally.

Two-factor authentication

All variants of two-factor authentication are strong brute-force/dictionary deterrents. There are two main variants:

  • Send a code via a third channel to allow authentication. It shouldn't require additional measures to prevent brute-force of that code, because it is meant to be single use and short lived.

  • Require a code from dedicated hardware/software key for authentication. The key must provide a single use code that authorizes the authentication.

Pros

Two-factor authentication is the only solution that can actually make brute-force/dictionary attack ineffective. That is accomplished by requiring a single use code, which being single use won't be guessed by attempting multiple times.

Cons

Two-factor authentication is often more expensive to implement.

What to use?

It makes sense to add additional protection to deter brute-force/dictionary attacks. The need for these measures is greater in systems where the password space is too small※, or where the minimal strength of the passwords is too low (for example the four-digit PINs common in banking).

※: It is good to put an upper cap on the size of the password. This way the server will not be choked while computing an expensive hash of the password. And you should use an expensive hash, because it deters brute-force attacks against stolen hashes.

CAPTCHA should be the first option, as it is very easy and cheap to implement (using established solutions such as reCAPTCHA).

Between retry time and locks, consider that the minimal viable implementation is similar: to lock an account you add a field to the account object/record marking it as locked, and then check that on authentication... to impose a retry time, you do the same thing, except what you store is the time at which authentication becomes valid again.
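
A sketch of that similarity (field names here are invented for illustration, assuming the account is a simple record/dict):

```python
import time

def is_locked(account):
    # Hard lock: a boolean flag stored on the account record.
    return account.get("locked", False)

def can_retry(account, now=None):
    # Retry time: the same kind of check, but the stored value is the
    # timestamp after which authentication becomes valid again.
    now = time.time() if now is None else now
    return now >= account.get("retry_after", 0)
```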


It makes sense to mitigate the inconvenience by adding these measures once a few attempts have failed. If so, apply CAPTCHA first as it doesn’t create cognitive downtime for the user.


Between the lock options, we have seen that combining origin and account is a better alternative (but also more complex) than either one alone. The implementation will require logs and analytics.

Finally, two-factor authentication has benefits that surpass the above solutions. Yet it is the most expensive to implement, as it requires a connection to a third-party service (email server, SMS service, dedicated app, dedicated hardware, etc.).

I would suggest to implement logging and analytics and based on them decide if you want to implement locking or if you want to implement two-factor authentication.

How many attempts?

There will be:

  • n1 attempts until a CAPTCHA appears.
  • n2 attempts until a retry time is imposed.
  • n3 attempts until the lock is applied.

Note: if you use two-factor authentication, you use it from the first attempt.

The values of these variables can be tweaked in the future based on your analytics. Yet, for reasonable defaults, consider:

  • n1 should be an estimate of the number of attempts a person may make when they have trouble typing the password. 2 attempts would be the minimum n1, because that accounts for the basic Caps Lock error. Note: Gmail allows me 20 attempts before showing a CAPTCHA.

  • n2 should be an estimate of the number of attempts a person would make before going to the access-recovery mechanism. There is no hard minimum; in fact it can be applied as soon as you apply the CAPTCHA, with increasing time intervals to wait. In my opinion n2 = 3 * n1 is a good starting point.

  • n3 should be an estimate of the number of attempts at which it becomes more probable that an attack is being made. Consider that the CAPTCHA and retry time should deter any manual attack, so n3 need not be much higher. In my opinion, n3 = 2 * n2 is a good starting point.

Note about retry time: The interval the user must wait can be increased on each attempt. This allows you to use a very small initial interval (for example 1 second) and build up from there to a hard cap (for example 1 day).

Note about counting attempts: You should avoid an overflow in the attempts count. If you are storing the number of attempts in the account object/record, handle the overflow. If you are doing a query on logs to get the number of failed attempts from the last successful one, consider adding a time interval (that will also cap the query time).
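
The n1/n2/n3 tiering above can be sketched as follows. The concrete values follow the heuristics just described (n1 as a floor of a few typos, n2 = 3 * n1, n3 = 2 * n2) and are starting points to tune from your analytics, not recommendations:

```python
N1 = 5       # failed attempts before a CAPTCHA is required
N2 = 3 * N1  # failed attempts before a retry delay kicks in
N3 = 2 * N2  # failed attempts before the lock is applied

def measures_for(failed_attempts):
    """Extra measures to apply at a given consecutive-failure count."""
    measures = []
    if failed_attempts >= N1:
        measures.append("captcha")
    if failed_attempts >= N2:
        measures.append("retry_delay")
    if failed_attempts >= N3:
        measures.append("lock")
    return measures
```
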

Theraot
  • 254
  • 1
  • 5
  • "It is good to put an upper cap to the size of the password. This way the server will not be chocked while making an expensive hash on the password." This may or may not be true. bcrypt itself has an input key (password) length of 72 bytes (with a recommendation to only use 56). If you calculate the SHA512 hash of the actual password and feed that to bcrypt, the additional cost of a long password is negligible. – Martin Bonner supports Monica Oct 11 '16 at 07:23
  • Another con for "lock the origin" is that any really significant brute force attempt is likely to come from a botnet, making this impractical. – mattdm Oct 11 '16 at 10:40
  • @MartinBonner [Django had that problem](https://www.djangoproject.com/weblog/2013/sep/15/security/) – Theraot Nov 19 '16 at 04:08
  • Good answer! You may want to add to 'cons' for CAPTCHAs that they often pose a privacy concern: Google's CAPTCHA seems to be most frequently used, and they encourage people to let them be tracked around the web by making the CAPTCHAs nigh impossible for those who don't (I know my privacy settings are working because I always get the most difficult kind of ReCAPTCHA, and sometimes get "please try again" indefinitely, no matter how many I solve correctly). The user also has to agree to Google's privacy policy and TOS to use ReCAPTCHA, which is a lot of legalese and invasive to privacy. – Luc Aug 01 '19 at 08:40
1

What I generally use is an "X in Y amount of time" formula. 3 tries in a second is a lockout, but 3 tries in 60 seconds is fine. 60 tries in 60 seconds is bad, but 60 tries in a day is fine.
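
A minimal sliding-window sketch of that "X tries in Y seconds" rule (class name and thresholds are illustrative):

```python
import time
from collections import deque

class AttemptWindow:
    def __init__(self, max_attempts, window_seconds):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = deque()  # timestamps of recent attempts

    def allow(self, now=None):
        """Record one attempt; return False once the window limit is hit."""
        now = time.time() if now is None else now
        # Forget attempts that have aged out of the window.
        while self.attempts and now - self.attempts[0] > self.window:
            self.attempts.popleft()
        if len(self.attempts) >= self.max_attempts:
            return False  # e.g. the 4th try within the window: reject
        self.attempts.append(now)
        return True
```

You would keep one window per login, and (as noted below) per IP and IP range as well.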

A lot will depend on what you're trying to protect. Also on whether there are any external rules that need following, like company policy, HIPAA, etc.

The general idea is that it should make the process take long enough that the "bad guy" just gives up and moves on. That being said, it's important to tie attempts not only to a login, but to an IP and IP range as well.

Also a lot depends on your "restore access" process. Let's say you run an API that allows customers to get shipping status. If locked out, they have to call support and verify. In this case I would probably allow something like 120 attempts every two minutes and something like 400 attempts in a 24-hour period. That may seem high, but with that "restore" policy, your customer's business could be down for hours or days if one of their scripts goes haywire.

The general idea is to stop or lengthen brute force attacks, but not to get in the way of normal users, even when normal users are using bad credentials.

coteyr
  • 1,506
  • 8
  • 12
  • Can you elaborate on tying attempts to IP and IP ranges? – mattdm Oct 12 '16 at 21:00
  • As part of a solution to the total problem you may want to block an IP that tries 700 login attempts per minute. This helps with DOS, but also means that someone would fail trying to log in to 700 accounts with the same password. For example, attempting random user names with the password "qwerty1!", if tied to user only, would let all 700 attempts succeed. Tying it to an IP means that only a few attempts will succeed. Doing similar for a range of IP addresses helps with botnets. – coteyr Oct 12 '16 at 21:06
  • It's very easy to get a list of usernames or emails for a website. Normally, it's public knowledge, but even when it's not, other related mailing lists are. For example Company A uses Jira. The Jira user list is private, but the company directory is not. So you could try all the people in the company directory, with the top 10 most popular passwords, and maybe get lucky. – coteyr Oct 12 '16 at 21:09
-2

I like the 3 limit (per hour; and perhaps 6-9 failures per day, 12-18 per fortnight), but duplicate attempts with the same passcode (within the group of 3) should not be counted. If you have actually forgotten your password then you probably do not log in that often, so it can wait. Better that than giving somebody too many attempts per day to guess it.

But under this scheme, if you happen to be locked out already, you (the user) will know someone has been trying to guess your password, and so we need a strategy for how to deal with that, especially when you want to log in.

  • This doesn't take denial of service attacks into account. – UTF-8 Oct 11 '16 at 11:06
  • _If you have actually forgotten your password then you probably do not log in that often, therefore it can wait._ does not necessary follow. Maybe it's something that is very important and needs to be done only once a year, and I forgot about it until the last minute. – mattdm Oct 11 '16 at 14:57