24

I was wondering whether reCAPTCHA is strong enough to prevent brute-force attacks from bots, or whether I need to add more security, such as sending a unique email to the user after every 5 attempts someone makes to log in to the account, and locking the account until the email has been checked.

The aim is to prevent automated password guessing on a website.

Gilles 'SO- stop being evil'
  • 50,912
  • 13
  • 120
  • 179
JohnnyBgud
  • 419
  • 1
  • 4
  • 8
  • Some related info (slightly different scenario) here: http://security.stackexchange.com/questions/18668/captcha-or-email-confirmation – R15 Dec 14 '15 at 12:48
  • Is your question about preventing automated password guessing on a website? – R15 Dec 14 '15 at 12:49
  • Yes, that's the point: I don't want bots being able to brute-force a password – JohnnyBgud Dec 14 '15 at 13:12
  • 1
    I would prefer not to lock down the account so that it can only be unlocked via email access: it happens too often that you lose access to your email account because you forgot its password, it was only a throwaway account, the email provider went out of business, and so on. More than once I have been asked to help somebody retrieve their email account because an ex-partner had changed the password of their Facebook or similar account, they needed the email account for the reset link, and sometimes they didn't even know their email address anymore. – H. Idden Dec 14 '15 at 15:09
  • I had 2 email-addresses where the provider went out of business or deleted the email-account because of inactivity. – H. Idden Dec 14 '15 at 15:09
  • I agree on that point, but on the other hand, if you lock the account for, say, 10 minutes, it might be a pain for a user who just wanted to take a quick look, while they could unlock it instantly with the email technique. I guess I have to fish for stats to see which is most convenient – JohnnyBgud Dec 14 '15 at 15:11
  • Keep in mind that although reCAPTCHAs may help with security, they're horrible for user experience. If you do implement one, I would still wait until the 5-try threshold is met, or find some other way to spare normal users from having to deal with it. – DasBeasto Dec 14 '15 at 16:04
  • Why not just rate-limit login failures by IP address? Something like one login per second (or growing quadratically, up to some cap). Spoofing an address is hard if TLS is involved (to say the least), and even assuming that a thousand students in a campus dormitory are NATted onto the same externally visible address, they do not all try to log in repeatedly over and over again, failing each time. Even with a botnet of a thousand machines, it takes _forever_ to brute-force a single (not on the top-10 worst password list) password if you can only try once per 15 seconds from each bot. – Damon Dec 15 '15 at 13:23

7 Answers

20

reCAPTCHA is great from a client-side point of view, but it's not perfect.

The mail technique that you mention is called account lockdown, and it is a very effective deterrent against brute-force attacks. I would implement it, because it adds a layer that is completely independent of the client side.

Another measure you can implement is throttling. It's unrealistic for a human to send 2 requests within 1 second (or whatever value you consider appropriate), so you can limit the frequency of accepted connections. In iptables you would do something like this:

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP

(Example taken from debian-administration.org.) This limits new connections to port 22 to 3 per minute from a given source address; the fourth and subsequent attempts within 60 seconds are dropped. The same approach can be useful for operations other than login routines.
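
The same idea can also be applied at the application level, which avoids penalising unrelated traffic from a shared IP (see the comments below). Here is a minimal sketch in Python, with an in-memory store and illustrative thresholds rather than a production implementation:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # length of the sliding window
    MAX_ATTEMPTS = 3      # attempts allowed per window (mirrors the iptables example)

    _attempts = defaultdict(deque)  # ip -> timestamps of recent login attempts

    def allow_login_attempt(ip: str) -> bool:
        """Return True if this IP may attempt a login, False if throttled."""
        now = time.monotonic()
        window = _attempts[ip]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # discard attempts that fell out of the window
        if len(window) >= MAX_ATTEMPTS:
            return False
        window.append(now)
        return True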

alexwlchan
  • 177
  • 11
Purefan
  • 3,560
  • 19
  • 26
  • 1
    So in your opinion I should stack multiple defence strategies... I really like that idea, because after reading some documentation it seems there is no perfect defence and the best security combines multiple techniques! Are there any downsides to throttling? I didn't find any in the few docs I read, but maybe I didn't search the right way. – JohnnyBgud Dec 14 '15 at 13:16
  • 2
    Yes, my suggestion is to use multiple defences. As for downsides when implementing throttling, the ones that occur come from a faulty implementation: the more you tighten the control, the less usable the system may become. In my opinion 1 request per second is usually sensible enough, but it's really up to the implementer and the specifics of the project. – Purefan Dec 14 '15 at 13:44
  • 1
    Alright then, I'll accept your answer, but if other people want to post something else, don't hesitate; I'll take any tip on this subject! – JohnnyBgud Dec 14 '15 at 13:54
  • 1
    @JohnnyBgud One downside to throttling (as illustrated in this answer) is that it would prevent multiple users from connecting from the same IP in the same second. That can be a problem, or not, depending on the situation. – njzk2 Dec 14 '15 at 19:04
  • 1
    I generally prefer application-level rate limiting to TCP throttling, especially if your server is using h2 – you can send an unlimited number of concurrent requests over one TCP connection. – Riking Dec 15 '15 at 00:54
  • Rate-limiting connections to port 22 is a terrible idea if some users transfer files via scp. E.g. WinSCP may create a new connection for each file, leading to many requests if multiple files are copied. Fail2ban is the better option here. – tarleb Dec 15 '15 at 12:55
  • @tarleb: that is an example to illustrate the suggestion, it is expected that the person implementing it adapts it to their needs. I make no claim that iptables is better than ufw or fail2ban – Purefan Dec 15 '15 at 13:50
  • @Purefan You are right that I was too specific in my comment. What I mean is that rate limiting at the connection level is dangerous, because one doesn't know which connections are legitimate but just clustered together. Accessing a webpage containing many small images would be another example where many requests originating from a single host could lead to a user being blocked. – tarleb Dec 15 '15 at 14:01
10

A CAPTCHA is normally intended to ensure that 'user' input is from a real person.

While it could help to prevent automated attacks against a website login mechanism, it is likely to negatively impact the user experience (username, password and CAPTCHA) unless the system can be configured to only enable the CAPTCHA after one or two failed logins.

The alternative mechanism for controlling attacks without compromising the user experience is an access control policy on the website which includes a limit on the number of unsuccessful login attempts during a given period of time, followed by a lockout duration. This helps to defeat both automated and targeted (i.e. human-driven) password-guessing attacks, so in effect you get two for one.

Whether to impose a timed lockout (i.e. an automatic reset after, say, 10 minutes) or to lock the account and wait for the valid user to respond in some way is a judgment call based on the particular scenario, i.e. things like data sensitivity, the size of the user population, the number of help desk staff available, etc.
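
As a rough sketch of the timed variant (the thresholds and the in-memory store here are illustrative assumptions, not a recommendation):

    import time

    MAX_FAILURES = 10       # failed attempts before the account locks
    LOCKOUT_SECONDS = 600   # automatic reset after 10 minutes

    _failures = {}  # username -> (failure_count, time_of_last_failure)

    def is_locked_out(username: str) -> bool:
        count, last = _failures.get(username, (0, 0.0))
        if count < MAX_FAILURES:
            return False
        if time.monotonic() - last >= LOCKOUT_SECONDS:
            _failures.pop(username, None)  # lockout expired, reset the counter
            return False
        return True

    def record_login(username: str, success: bool) -> None:
        if success:
            _failures.pop(username, None)  # reset the counter on success
        else:
            count, _ = _failures.get(username, (0, 0.0))
            _failures[username] = (count + 1, time.monotonic())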

Notifying the user of multiple failed logins could help to make the user aware of potential attempts of unauthorised access, but could also result in lots of unwelcome support calls, so set the bar at a level which works for you.

R15
  • 2,923
  • 1
  • 11
  • 21
  • I guess you're right, but as you said it can result in a lot of negative feedback, which is why I wanted to find an alternative solution – JohnnyBgud Dec 14 '15 at 13:18
  • Not if the threshold is reasonable (10 failures) and you use a finite lockout duration that is not very long (say 10 minutes). If someone cannot remember their password after 10 goes, they are probably going to request a password reset, so they won't really be affected by the lockout anyway. If it is a malicious attempt, it will more than likely not happen at the same time as the user is attempting to log in. Ultimately, though, it's your call. I'd be more annoyed by a CAPTCHA every time I log in. – R15 Dec 14 '15 at 13:23
  • 2
    CAPTCHA isn't needed every login. Just after 1 or 2 failed logins. – Neil Smithline Dec 14 '15 at 15:18
  • @NeilSmithline True, I wasn't thinking laterally enough when answering – I (erroneously) inferred from the question that the CAPTCHA would be used for every attempt. – R15 Dec 14 '15 at 15:22
5

reCAPTCHA certainly makes password guessing harder, but not impossible. Hackers set up sites with goodies (games, downloads, etc.) which are CAPTCHA-protected, and relay your CAPTCHA there. Users trying to get their download will solve your CAPTCHAs, enabling the hackers to submit one more password guess for each CAPTCHA solved. Account lockdown is more effective, since only the owner of the e-mail address can unlock the account. It's also more disruptive for users, though.

Keep in mind that both captcha and account lockdown are terrible user experiences, and your site should be pretty good / unique to be able to afford such techniques. Give your users at least a couple of attempts before you ask to solve a captcha for the first time, otherwise users may simply move to the next site offering the same services as yours.

Dmitry Grigoryev
  • 10,072
  • 1
  • 26
  • 56
  • You might want to add that there are services which any decent programmer can use, e.g. 2captcha.com (and their rivals which they so kindly give links to in their FAQ!) that will do something similar, but by actually filtering out spammy "prankster" humans. – wizzwizz4 Dec 14 '15 at 16:54
2

Another means of additional security is 2-factor authentication. This is where users register their cell (mobile) phone number, and whenever they log on the server sends a code via SMS which must be entered by the user before gaining access. As with all things security, this adds a layer of annoyance for the user, but it is extremely effective.
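
A rough sketch of the server side of such an SMS step; send_sms is a placeholder for whatever SMS gateway you use, and the code length and expiry are illustrative:

    import secrets
    import time

    CODE_TTL_SECONDS = 300  # each code is valid for 5 minutes

    _pending = {}  # username -> (code, time_issued)

    def send_sms(phone_number: str, message: str) -> None:
        # Placeholder: call your SMS provider's API here.
        print(f"SMS to {phone_number}: {message}")

    def start_second_factor(username: str, phone_number: str) -> None:
        code = f"{secrets.randbelow(1_000_000):06d}"  # random 6-digit code
        _pending[username] = (code, time.monotonic())
        send_sms(phone_number, f"Your login code is {code}")

    def verify_second_factor(username: str, submitted: str) -> bool:
        code, issued = _pending.get(username, (None, 0.0))
        if code is None or time.monotonic() - issued > CODE_TTL_SECONDS:
            return False
        if secrets.compare_digest(code, submitted):  # constant-time comparison
            _pending.pop(username, None)             # codes are single-use
            return True
        return False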

A much simpler means of security is to restrict short or easy passwords. If your user is allowed to make "1234" their password, then a bot is much more likely to guess it than "ihatelongpasswordssomuch".
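
A minimal version of such a check, using a length floor plus a small blocklist (in practice you would load a published leaked-password list rather than the handful of examples shown here):

    MIN_LENGTH = 8

    # Illustrative entries only; load a real leaked-password list in practice.
    COMMON_PASSWORDS = {"1234", "123456", "12345678", "password", "qwerty"}

    def password_allowed(password: str) -> bool:
        """Reject passwords that are too short or too common."""
        return len(password) >= MIN_LENGTH and password.lower() not in COMMON_PASSWORDS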

  • 1
    Often 2-factor is configured to only occur for login to a new IP, or if no login has occurred on that IP for a while. That way, it's less annoying. – wizzwizz4 Dec 14 '15 at 16:55
  • @wizzwizz4 I think more often cookies are used rather than IPs but yes it need not be used for every login – Dean MacGregor Dec 14 '15 at 16:57
  • As someone who deletes cookies except for a few whitelisted sites, I can confirm that cookies are more common and more annoying. I can't do a data export on my Google account because they want me to "use a browser you regularly use". Well, guess whom I block from tracking me around the web by deleting their cookies... (I guess I should try a GDPR data request in writing instead.) – Luc Aug 01 '19 at 08:30
2

reCaptcha is quite good but is not perfect.

A tool like fail2ban is another effective option. Configure the website to log failures, then configure fail2ban to monitor that log; when it detects X failures from an IP address within a set amount of time, it adds a firewall rule so the server goes offline to that IP address for a predetermined period.
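
As an illustration, a custom jail for this might look something like the following; the jail name, filter name, log path, log format, and thresholds are all assumptions you would adapt to your own site:

    # /etc/fail2ban/jail.local
    [mysite-login]
    enabled  = true
    port     = http,https
    filter   = mysite-login
    logpath  = /var/log/mysite/login.log
    maxretry = 5
    findtime = 600
    bantime  = 3600

    # /etc/fail2ban/filter.d/mysite-login.conf
    [Definition]
    failregex = ^.*Failed login for .+ from <HOST>$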

Advantages:

  • You don't have to pay for the bandwidth of repeated attempts (although it is probably small)
  • No web-server or database resources are consumed while the IP address keeps trying
  • Blocks any other attack attempts that may be happening from that IP address
  • Can be used against other kinds of attack attempts too

Disadvantages:

  • If you slip up yourself, it can be a challenge having your own IP blocked for a period, but this can be worked around by adding your own IPs to a whitelist.
  • Instances where IPs are shared by multiple users. This may disadvantage a few users, but if it's a large place like a campus or corporate network, their own systems will hopefully be detecting something too. It's an attack attempt, after all.
Trevor
  • 121
  • 3
  • What about instances where IPs are shared by multiple users? – schroeder Dec 15 '15 at 04:17
  • @schroeder Good point, added it to the disadvantages. If it's a small home site behind a shared IP, the chance that a legitimate user is sharing the same IP is probably near zero. If it's a large corporate network, they have a problem with a hacker on their network. It's something that a site owner has to weigh up on a case-by-case basis. – Trevor Dec 15 '15 at 04:23
1

I have seen sites that use a CAPTCHA after a "low" number of incorrect password attempts (such as 3) and then lock the account for a period of time (such as an hour) after a "high" number of incorrect password attempts (such as 10). Effectively this means that brute force attacks are limited to a maximum of 10 guesses per hour, performed by a human operator - not very enticing from an attacker's point of view.
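
Sketched in code, that escalation policy might look like this (using the thresholds from the answer; the in-memory failure counter is illustrative):

    import time

    CAPTCHA_THRESHOLD = 3    # show a CAPTCHA from the 4th attempt onwards
    LOCKOUT_THRESHOLD = 10   # lock the account after 10 failures
    LOCKOUT_SECONDS = 3600   # one-hour lockout

    _failures = {}  # username -> (failure_count, time_of_last_failure)

    def login_policy(username: str) -> str:
        """Return 'allow', 'captcha', or 'locked' for the next attempt."""
        count, last = _failures.get(username, (0, 0.0))
        if count >= LOCKOUT_THRESHOLD:
            if time.monotonic() - last < LOCKOUT_SECONDS:
                return "locked"
            _failures.pop(username, None)  # lockout expired
            return "allow"
        if count >= CAPTCHA_THRESHOLD:
            return "captcha"
        return "allow"

    def record_failure(username: str) -> None:
        count, _ = _failures.get(username, (0, 0.0))
        _failures[username] = (count + 1, time.monotonic())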

Micheal Johnson
  • 1,746
  • 1
  • 10
  • 14
1

Many answers seem to imply that reCAPTCHA verifies that the input is human... it does not. There are plenty of open-source OCR utilities (both client- and server-side) that allow any automated script to parse text off an image. So anything text-based, even calculations, can easily be spoofed.

OCR does add a lot of time to each guess and requires far more resources (computational power), but in the end it only delays the attacker. It should therefore never be used as the only defensive mechanism, although of course one is better than none. If possible, use the newer image-based verifications (select all images that show a flower). Input-based mechanisms (as Google is currently testing) can also be spoofed by simulating the input hardware convincingly.

I would also advise keeping a history of IPs, user agents, and anything else identifiable whenever an incorrect password is entered for any user. If I were to brute-force a user, I would alternate between many accounts, use many different proxies, and use a user-agent spoofer to make the traffic seem random; but with enough information, everything can be detected and blocked.
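
One simple way to keep such a history is to record every failed attempt with its metadata so that patterns can be correlated later; SQLite is used here purely for illustration:

    import sqlite3
    import time

    conn = sqlite3.connect("auth_events.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS failed_logins
                    (ts REAL, username TEXT, ip TEXT, user_agent TEXT)""")

    def record_failure(username: str, ip: str, user_agent: str) -> None:
        """Store one failed login attempt with identifying metadata."""
        conn.execute("INSERT INTO failed_logins VALUES (?, ?, ?, ?)",
                     (time.time(), username, ip, user_agent))
        conn.commit()

    def failures_from_ip(ip: str, window_seconds: float = 3600.0) -> int:
        """Count recent failures from one IP across all accounts."""
        cutoff = time.time() - window_seconds
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM failed_logins WHERE ip = ? AND ts > ?",
            (ip, cutoff)).fetchone()
        return count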

  • The OCR capable of breaking reCAPTCHA is pretty state of the art (within the last 3 years, as academic demonstrations). reCAPTCHA is harder to OCR than normal text; it normally includes some form of obscuring, distortion, etc. Google was (is?) sourcing its CAPTCHAs from its book-scanning project – words for which the OCR had low confidence were given to humans to solve via the CAPTCHA. – Frames Catherine White Dec 15 '15 at 08:59