
Should failed login attempts be logged? My doubt is that if there is a distributed brute force attack, it might exhaust the available disk space of the database. What is the best practice for this?

I'm protecting a public-facing web server with sensitive data.

Based on the answers so far, one other question that occurred to me is whether web server logs would be enough for logging such attempts. Would it be redundant to log them in the database?

D.W.
John L.
  • Great question. Can you give more details about the type of service you're talking about? Is this a public-facing SSH server? Is this a corporate Windows domain? Physical access to a building? Also, what is the sensitivity of the data being protected (measured as a dollar value of loss / cleanup in the case of a breach)? – Mike Ounsworth Jan 04 '17 at 16:05
  • Related: [Centralizing syslogs as an easy way to improve your environment](https://utcc.utoronto.ca/~cks/space/blog/sysadmin/CentralizeSyslog), especially the parts on disk space. – user Jan 04 '17 at 21:27
  • Last year's SSH brute-force attacks produced less than 150 MB of compressed log files on my server. If you've got a sensible log-rotation plan, disk space isn't going to be an issue. – Mark Jan 05 '17 at 01:12
  • If you have follow-up questions, it's better to ask them separately in a separate post using the 'Ask Question' button in the upper-right. This site's format works best when you avoid having multiple questions in the same post. You can do that, and then edit it out of this post, and it might increase the likelihood that you receive a good answer to your follow-up question. – D.W. Jan 05 '17 at 02:26
  • But don't [log the password used in the failed attempt](https://security.stackexchange.com/questions/16824/is-it-common-practice-to-log-rejected-passwords)! – CodesInChaos Jan 05 '17 at 12:49
  • If there is a distributed brute force attack then you should have other tools (eg fail2ban) to prevent the majority of login attempts even reaching your system. – Qwerky Jan 05 '17 at 13:41

3 Answers


Yes, failed login attempts should be logged:

  • You want to know when people are trying to get in
  • You want to understand why your accounts are getting locked out

It's also very important - a point the old Windows logging guidance never emphasized enough - to log successful login attempts as well: if you have a string of failed login attempts, you really, really, really should know whether the last one was followed by a successful login.

Logs are relatively small. If there were enough login attempts for logging to cause a problem, then "not knowing about the attempts" is probably a worse problem than "finding out about them when we ran out of disk."


A quick caveat - as @Polynomial points out, the password should not be logged (I seem to recall that 25 years ago some systems still did that). However, you also need to be aware that some legitimate login attempts will fail when people enter their password into the username field, so passwords do get logged. Doubt me? Trawl your logs for Windows Event ID 4768:

LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4768
EventType=0
Type=Information
ComputerName=dc.test.int
TaskCategory=Kerberos Authentication Service
OpCode=Info
RecordNumber=1175382241
Keywords=Audit Failure
Message=A Kerberos authentication ticket (TGT) was requested.

Account Information:
    Account Name:       gowenfawr-has-a-cool-password
    Supplied Realm Name:    TEST.INT
    User ID:            NULL SID

Correspondingly, you should limit access to these logs to the necessary people - don't just dump them into a SIEM that the whole company has read access to.
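
Taken together, the advice above - record both outcomes, and never write the secret itself - can be sketched as follows. This is a hypothetical illustration, not a real API: `check_password` stands in for whatever verification your system actually performs.

```python
import logging

logger = logging.getLogger("auth")
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO
)

def attempt_login(username, source_ip, password, check_password):
    """Log both failed and successful attempts - but never the password."""
    if check_password(username, password):
        logger.info("LOGIN SUCCESS user=%s ip=%s", username, source_ip)
        return True
    logger.warning("LOGIN FAILURE user=%s ip=%s", username, source_ip)
    return False
```

Note that the log line carries the account and source address, which is what you need for the "failures followed by a success" check, without ever touching the credential.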


Update to address question edit:

Based on the answers so far, one other question that occurred to me is whether web server logs would be enough for logging such attempts. Would it be redundant to log them in the database?

Best practices are that logs should be forwarded to a separate log aggregator in any case - for example, consider PCI DSS 10.5.4. In practice, such an aggregator is usually a SIEM, and functions like a database rather than flat log files.

So, yes, it's "redundant" by definition, but it's the kind of redundancy that's a security feature, not an architectural mistake.

The advantages of logging them into a database include searching, correlation, and summation. For example, the following Splunk search:

source="/var/log/secure" | regex _raw="authentication failure;" | stats count by user,host

will allow us to roll up authentication failures by user and host:

(Screenshot: failed logins rolled up by the Splunk search)

Note that the ability to query discrete fields like 'user' and 'host' is dependent upon the SIEM picking logs apart and understanding what means what. The accessibility of those fields here is a side effect of Splunk automagically parsing the logs for me.

Given that your original question dealt with space constraints, it should be pointed out that any database or SIEM solution is going to take more disk space than flat text file logs. However, if you use such a solution, you'll almost always put it on a separate server for security and space management reasons. (There are even SIEM-in-the-cloud solutions now to make life easier for you!)
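
As a minimal sketch of that "separate server" forwarding with Python's standard `SysLogHandler` - the address is a placeholder for your aggregator's collector, and UDP on port 514 is assumed purely for illustration:

```python
import logging
import logging.handlers

auth_log = logging.getLogger("auth-forwarder")
auth_log.setLevel(logging.INFO)

# Placeholder address: point this at your log aggregator / SIEM collector.
# 127.0.0.1:514 over UDP is used here only so the example is self-contained.
remote = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
auth_log.addHandler(remote)

auth_log.warning("LOGIN FAILURE user=%s ip=%s", "alice", "203.0.113.7")
```

Because the event leaves the box as soon as it is emitted, a local disk-fill (or an intruder cleaning up after themselves) cannot erase the copy on the aggregator.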

gowenfawr
  • "I seem to recall that 25 years ago some systems still did that" ...I'm sadly confident that anything bad that happened 25 years ago is still happening today. – jpmc26 Jan 05 '17 at 00:24
  • I always enjoy an answer that suggests trolling ( not 'trawling' ) as part of the solution ;) – a20 Jan 05 '17 at 01:17
  • @a20 those users who've had to deal with me after I reviewed 4768 logs can attest there's more troll than trawl under that bridge. – gowenfawr Jan 05 '17 at 01:55
  • Trawl the logs and troll the users eh? Looks good, carry on! – a20 Jan 05 '17 at 05:44
  • Or you regularly lock/standby your machine, then come in pre-coffee and hit ctrl-alt-del, type password, hit enter, then realise it had rebooted overnight. This is made more likely by the response to ctrl-alt-del being slow when the machine has just woken up. I would have thought they should have taken this into account designing the logging as it's really quite likely that this will leak passwords. – Chris H Jan 05 '17 at 09:54
  • @ChrisH, spot on, it's a security failure to have inconsistent UI which can lead to unplanned field entry. And when I say "inconsistent" I mean "sometimes one way, sometimes another, not obviously predictable to end users." – gowenfawr Jan 05 '17 at 14:42
  • @ThomasWeller thanks for pointing the edit out, I hadn't seen it, I've updated my answer to address that as well. – gowenfawr Jan 05 '17 at 16:45

As a complement to @gowenfawr's answer that explains why you should log those attempts, I would like to say that there are ways to ensure that logs will never exhaust your disks.

At least in the Unix/Linux world, tools like logrotate or rotatelogs allow you to switch to a new log file when the current one grows beyond a certain threshold. They are commonly used with the Apache web server (rotatelogs comes from the Apache Software Foundation) or with the syslog system.

For example, logrotate renames the log file (keeping a ring of old copies, generally about 10 of them), optionally compresses it, and tells the program generating the log to reopen its log file, either by sending it a dedicated signal or by running an arbitrary command.

That way, if your server is under a DoS attack, the size of your log files will remain under control. Of course you will lose older events, but that is definitely better than crashing the server because of an exhausted disk partition.
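
As a hypothetical illustration (the path, sizes, and postrotate command will all vary by distribution and syslog daemon), a size-based logrotate stanza might look like:

```
# /etc/logrotate.d/auth - illustrative values only
/var/log/secure {
    size 100M          # rotate once the file exceeds 100 MB
    rotate 10          # keep a ring of 10 old copies
    compress
    delaycompress
    missingok
    postrotate
        # tell the syslog daemon to reopen its log files; the exact
        # command depends on your distribution
        /usr/bin/systemctl kill -s HUP rsyslog.service
    endscript
}
```

When rotating on size rather than on a schedule, remember that logrotate itself is typically run from cron, so it must be invoked often enough for the size check to matter.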

Serge Ballesta
  • Would it be good to maintain two parallel `logrotate`d logs? One detailed [account, time, address, pass/reject,...] and the second one keeping track of number of failed attempts between two successful log-ins and number of successful log-ins between two failed attempts [(account), event, counter]? – Crowley Jan 04 '17 at 21:48
  • Keep in mind that in some Linux systems, `logrotate` is triggered by cron and may only run once a day. You may need to boost the interval to prevent your disks from filling up in a DDoS type scenario. It may also be a good idea to rate-limit the failed login attempts by IP block, as well as by username – BobTuckerman Jan 05 '17 at 15:44
  • @BobTuckerman: You are right! The man pages advises to run it with a short delay (about 5 minutes) if it is used on a size base. – Serge Ballesta Jan 05 '17 at 15:47

It really depends on what value you think you could derive from the information. There's limited value in having pages of logs telling you that your server is under attack; it's internet-facing and will likely be under varying degrees of constant bombardment for its lifetime.

Depending on the configuration of your server, it is quite possible to create an availability issue by exhausting the available disk space with logs. It does happen. Gowenfawr is right that logs don't take up much space, and that is exactly why disk-space exhaustion can take years to surface - but it's a major pain when it does.

If you decide to log, then you need to design a log management strategy and consider some of the following:

  • What am I going to do with my logs? What happens after you establish someone is trying to gain access to your system?
  • Will my logs contain any potentially sensitive data? (Remember, real users can sometimes fat-finger their credentials). If the answer is yes, consider either sanitising logs before storage or encrypting them once at rest
  • How verbose should my logging be, and which events are worth recording?
  • How often should I rotate logs? Log rotation is a pretty standard way of dealing with log sizes. It can allow you to compress old logs and potentially delete particularly old log archives
  • Security Information and Event Management. You mentioned that your server will contain sensitive information; depending on what that is, you might want to look into SIEM products that can provide useful insights on top of logging and other data, such as alerting, dashboards, and forensics
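
On the sanitising point, one hedged sketch of the idea (the account set below is a stand-in for your real user directory): if the supplied "username" is not a known account, treat it as a possibly fat-fingered password and redact it before it reaches storage.

```python
# Stand-in for a lookup against your real account directory.
KNOWN_ACCOUNTS = {"alice", "bob", "svc-backup"}

def loggable_username(supplied: str) -> str:
    """Return a value safe to write to the log: an unknown 'username'
    may really be a password typed into the wrong field."""
    return supplied if supplied in KNOWN_ACCOUNTS else "<redacted>"
```

The trade-off is that you lose visibility into which non-existent usernames attackers are guessing; some sites log a hash of the unknown value instead so attempts can still be correlated.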

Speaking personally, I tend to find logs useful mainly for forensic analysis - they help work out what happened after a successful breach. As Gowenfawr mentioned, logging successful attempts to log into a system is just as important as (probably more important than) logging the failed ones.

One last point: your login mechanism should be built such that the likelihood of a distributed brute force ever working is vanishingly small. You haven't given a lot of detail on what you've built, but using strong backend algorithms, particularly computationally expensive password hashes, and introducing backoff timing into login attempts can greatly reduce the chances that an attacker will ever gain access this way.
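
The backoff idea can be sketched as follows - a hypothetical per-account exponential delay, capped so that an attacker can't abuse it to lock legitimate users out indefinitely:

```python
# Consecutive failures per account (in-memory for illustration; a real
# system would persist this and likely track source IP as well).
failures: dict[str, int] = {}

def next_delay_seconds(account: str, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay to impose before the next attempt: base * 2^failures, capped."""
    return min(cap, base * (2 ** failures.get(account, 0)))

def record_failure(account: str) -> None:
    failures[account] = failures.get(account, 0) + 1

def record_success(account: str) -> None:
    failures.pop(account, None)
```

After two failures the next attempt for that account waits four seconds instead of one, and the delay resets on a successful login.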

Rob C