
I'm relatively new to security and I'm looking to prevent brute force logins on a web application I'm creating.

After doing some research, I decided to require a captcha after a user makes too many login attempts within a set amount of time. However, I haven't found much documentation on best practices for this, only a bunch of simple examples and open source packages (which I'm reluctant to use without a deeper understanding of how and why they work).

I already have sessions set up, so I was thinking of including the IP address in the session token itself (all of which is encrypted) and cross-checking it to ensure that only a certain number of login attempts are made within a certain amount of time before requiring a captcha. My thinking was that this would have the added bonus of preventing session stealing, since I could verify that the IP address of the sender matches the IP on the session.
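Roughly, the check I have in mind looks something like this (just a sketch; the names are illustrative and a plain in-memory dict stands in for my encrypted session store):

    # Sketch of the session-based check I have in mind (illustrative names;
    # an in-memory dict stands in for the real, encrypted session store).
    import time

    MAX_ATTEMPTS = 5          # failed logins tolerated per session...
    WINDOW_SECONDS = 15 * 60  # ...within this window

    # session_id -> {"ip": IP bound to the session, "attempts": [timestamps]}
    sessions = {}

    def needs_captcha(session_id, client_ip):
        # An unknown session or an IP mismatch is treated as suspicious; the
        # IP check is also what would catch a stolen session token.
        session = sessions.get(session_id)
        if session is None or session["ip"] != client_ip:
            return True
        now = time.time()
        session["attempts"] = [t for t in session["attempts"]
                               if now - t < WINDOW_SECONDS]
        return len(session["attempts"]) >= MAX_ATTEMPTS

    def record_failed_login(session_id):
        session = sessions.get(session_id)
        if session is not None:
            session["attempts"].append(time.time())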

Would this approach work or does it set off red flags?

Mike
  • "open source packages (which I'm reluctant to use because I don't want to rely on a black box my security" Open Source is the opposite of a black box. – Goose Jun 02 '17 at 19:45
  • @Goose point taken. – Mike Jun 02 '17 at 19:47
  • Open source or not, you're still right to be skeptical about applying drop-in security solutions you don't fully understand! – Ivan Jun 02 '17 at 19:59
  • IP address stickiness is not reliable for mobile devices with multiple radios. If your service is going to support mobile applications, consider the case where a device roams from Wi-Fi to cellular data: the IP address will change, but you don't want to require users to log in again or complete a captcha. A better approach is browser fingerprinting or device fingerprinting. This SE topic on how Google reCAPTCHA works might add some context: https://security.stackexchange.com/questions/124532/what-triggers-googles-recaptcha – ARau Jun 03 '17 at 02:33
  • I don't think one needs to overthink this. No matter how many session cookies, IP address tables, or browser fingerprints you use, there will always be ways around them. Just set a local throttle per IP address and a global throttle high enough that it won't be reached under normal operation. When either threshold is exceeded, require a Google reCAPTCHA for login. They know what they're doing. Plus it works and is easily set up. – BlueWizard Jun 04 '17 at 22:15

2 Answers


I see a few things to think about:

  1. You'd need to make very sure that a login attempt is only valid when the client sends you a valid session cookie. Otherwise, an attacker can simply not send a session cookie, and you'll happily generate a fresh one for him on every login attempt. That seems like a hard problem to solve: how do you force an attacker to send you a cookie if he doesn't want to?

  2. Why do you assume that login attempts will all come from the same IP address? What happens if an attack is launched from multiple IPs?

  3. What happens if an attacker doesn't target a single user, but instead tries a whole range of usernames and passwords? You won't detect the attack in progress, because no single user will see multiple login attempts in rapid succession.

IMO, you can't get around keeping some persistent server-side state about the global frequency of requests to your login endpoint (or in fact to any endpoint), irrespective of who the client seems to be and whether you get a session cookie or not.
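To illustrate, here's a minimal sketch of that kind of server-side state; the names are made up, and an in-memory structure stands in for whatever persistence you actually use (Redis, a database, etc.):

    # Global and per-IP throttling for the login endpoint, applied before
    # any session cookie is even looked at. In-memory deques stand in for
    # real persistent storage.
    import time
    from collections import defaultdict, deque

    WINDOW = 60          # seconds
    GLOBAL_LIMIT = 1000  # total login attempts per window before captcha
    PER_IP_LIMIT = 10    # attempts per source IP per window before captcha

    global_hits = deque()
    per_ip_hits = defaultdict(deque)

    def _prune(hits, now):
        # Drop timestamps that have fallen out of the sliding window.
        while hits and now - hits[0] > WINDOW:
            hits.popleft()

    def login_needs_captcha(client_ip):
        now = time.time()
        _prune(global_hits, now)
        _prune(per_ip_hits[client_ip], now)
        global_hits.append(now)
        per_ip_hits[client_ip].append(now)
        # The global counter also catches attacks spread across many IPs
        # and many usernames (points 2 and 3 above).
        return (len(global_hits) > GLOBAL_LIMIT
                or len(per_ip_hits[client_ip]) > PER_IP_LIMIT)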

Out of Band
  • After doing a bit more digging, I now agree with your bottom paragraph. The only workaround I could find for the issues you raised was to implement some form of back-end persistence. After studying some of the available open source packages, I realized they had much more robust implementations than what I'd be able to build for now, so I'm integrating one into my back end. Thanks again for your time! The answer was quite helpful. – Mike Jun 03 '17 at 22:28
  • And that's exactly the reason why open source is really cool. It's mature software that many have already scrutinized. – BlueWizard Jun 04 '17 at 22:16

Open source packages (stable, reputable ones that ship with mainstream distros) are a much smaller risk than open source advice ;) There are mountains of bad advice out there. Oh, BTW, this advice is quite legit! ;)

So a healthy dose of skepticism (not paranoia) is good for security, and you're off to a good start.

In general:

  1. You need multiple protections that work together. Don't look at just one threat in isolation, though do examine at the end whether you are covering all threats at least to some extent.
  2. Complete prevention in all circumstances is great, but most often, raising the barriers high enough to make an attack uneconomical for non-targeting attackers is sufficient.

More specifically:

  1. In general, I hate captchas as a user, so I don't use them in systems I develop. If I must endure one, I prefer a simple barrier (see principle 2 above) such as what CloudFlare uses. Last I checked, it was a combination of server-side analysis (bot requests generally come at different frequencies than human ones), some JS to detect browser characteristics (bots generally stop at UA string spoofing), a small injected sub-second delay that slows down bots but is barely noticeable to humans, etc.
  2. You did mention that you'd require a captcha when you detect repeated failed login attempts. That's a good way to do it, and it indicates that you are using server-side checks.
  3. More server-side checks are possible (including IP address checks, if you like), though they aren't recommended due to the roaming/dynamic IP address issues mentioned by @blownie55 and @Pascal. You don't need to store the IP address in cookies; you can always get it from the request itself and verify it against properties stored in the session.
  4. In some cases, we "temporarily block attacking IP addresses" when we detect a brute force attack in progress (see the sketch after this list). Anything from 30 minutes to 1 day should work, depending on the nature of the attackers in your asset domain. Riskier domains generally need longer blocking; high-volume domains need shorter blocking; risky high-volume domains need a philosopher to step in. :)
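As a rough illustration of point 4, temporary blocking can be as simple as this (hypothetical names, in-memory store; use shared storage if you run more than one process):

    # Temporarily block an IP once a brute force attack is detected; the
    # block expires on its own after BLOCK_SECONDS.
    import time

    BLOCK_SECONDS = 30 * 60   # anywhere from 30 min to 1 day, per your domain
    blocked_until = {}        # ip -> unix timestamp when the block expires

    def block_ip(ip, duration=BLOCK_SECONDS):
        blocked_until[ip] = time.time() + duration

    def is_blocked(ip):
        expiry = blocked_until.get(ip)
        if expiry is None:
            return False
        if time.time() >= expiry:
            del blocked_until[ip]  # block has lapsed; clean up
            return False
        return True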

Hope this helps.

Sas3