2

Say I have some Apache logs that show brute-force attempts on a login page. I've singled out the IP and found out who the culprit was. How can I show a third party that I didn't make up the entries in those logs?

Is there a way to systematically prove, through some means of hashing or system verification, that the logs haven't been edited in any way?

The scenario I gave above is purely hypothetical, but I was wondering if you could verify logs like that somehow.

TACO
  • 23
  • 3

5 Answers

2

No, you cannot prove to someone else that the logs on your system have not been tampered with. Clearly, if you own a given system, you can do whatever you like with it, including manipulating any of its files.

In the end it's a question of trust. That third party would therefore have to define what they require in order to trust your data. One possible solution would be for them to deploy their own system between your server and the internet, so that they can observe the network traffic themselves.

There are, however, tools that regularly check file integrity (AIDE and Tripwire, for example), but they usually target more or less static content such as configuration files and are probably not applicable to log files. Even if they did the job, they would still run on your system and would not change the scope of trust the third party has to accept.
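For reference, the usual workflow with such a tool looks roughly like this (AIDE shown here; the database path is the Debian default and may differ on your distribution):

```
# Build the initial baseline database of file hashes and attributes.
sudo aide --init
# AIDE writes the new database alongside the old one; activate it.
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
# Subsequent runs report any file that no longer matches the baseline.
sudo aide --check
```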

2

There should be trust between administrators, but the same question is asked every time a system gets compromised: if an attacker gains root access to the system, the log files could have been tampered with, too.

  1. Have a separate log server that only receives all the logs but doesn't allow the sending system to alter or read them afterwards. Or use a security information and event management (SIEM) system with even more functionality, such as anomaly detection. This approach ensures the logs from before the compromise can be trusted (a minimal forwarding sketch follows this list).

  2. Have your configuration files audited by a trusted, independent person and take checksums. If the configuration changes, audit it again. This prevents someone from quietly lowering the log level, which could otherwise make step 1 fail. A checksum mismatch would be an extra source of alerts, too.
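As a minimal sketch of the forwarding side of point 1, using rsyslog (the collector address 192.0.2.10 and port 514 are placeholders for your own log server):

```
# On the web server: send a copy of every syslog message to the remote
# collector over TCP ("@@" means TCP; a single "@" would mean UDP).
echo '*.* @@192.0.2.10:514' | sudo tee /etc/rsyslog.d/90-forward.conf
sudo systemctl restart rsyslog
```

Note that Apache access logs don't pass through syslog by default; they would additionally have to be piped in, e.g. with a `CustomLog "|/usr/bin/logger -t apache" combined` directive.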

Esa Jokinen
  • 16,100
  • 5
  • 50
  • 55
  • What if your separate log server was snapshotted every second? The snapshot would reside on another server, and the hash would be a key that could only be unlocked with a password. Of course, the snapshotted OS would have to be a lightweight distro. – TACO Feb 12 '19 at 19:45
2

Disclaimer: I Am Not A Lawyer.

Despite this, I know a general rule: nobody can serve as proof for themselves. That means that if the log was generated on your system, you cannot use it as definitive proof. You always need a third party to be involved, for example to sign something. For instance, you could hash the log file at specific times (say, once a day) and use a third-party service to provide trusted timestamping.

That would still not prove that the log file is accurate, only that it has not been modified since it received its timestamp.
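As a sketch of how that could look with an RFC 3161 trusted-timestamping service (freetsa.org is just one public example of a TSA; the file names are placeholders):

```
# Build a timestamp request over a SHA-256 hash of the rotated log.
openssl ts -query -data access.log.1 -sha256 -cert -out request.tsq

# Submit it to the timestamping authority and store the signed response.
curl -s -H 'Content-Type: application/timestamp-query' \
     --data-binary @request.tsq https://freetsa.org/tsr -o response.tsr

# Later, anyone holding the TSA's CA certificate can verify that this
# exact file existed, unmodified, at the signed point in time.
openssl ts -verify -data access.log.1 -in response.tsr -CAfile cacert.pem
```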

I'm unsure whether this applies to your use case, but it's the only way I can imagine...

Serge Ballesta
  • 25,636
  • 4
  • 42
  • 84
0

Similar to Esa Jokinen's points, you can further strengthen the case that these logs haven't been tampered with by setting up log centralization: forwarding the logs to a separate log server, or to a system (i.e. a SIEM) that ingests and processes them via a log collector.

In addition, you can do file integrity monitoring on certain paths: calculate the checksums of these files, re-calculate them on subsequent scans at set intervals to detect any changes, and send the checksum results to the log centralization server. Some log collectors have a module for file integrity monitoring; LogRhythm is one commercial option, and there are various posts out there on "top X log monitoring and log collection tools". You'd want a suite that can do log collection (for centralization), not just log monitoring.
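A minimal sketch of that checksum loop (the paths are assumptions, and this only makes sense for rotated logs, since an active log file changes with every request):

```
# Baseline: record checksums of the rotated Apache logs.
sha256sum /var/log/apache2/*.log.1 > /root/log-baseline.sha256

# On a schedule (e.g. from cron): re-check, and raise a syslog alert on
# any mismatch so the central log server receives it as well.
sha256sum --check --quiet /root/log-baseline.sha256 \
    || logger -p auth.alert "log checksum mismatch on $(hostname)"
```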

In February 2019, Threatpost also published a post about data manipulation attacks (here) with some ideas for mitigating them, such as file integrity monitoring.

NASAhorse
  • 310
  • 1
  • 7
-3

Sure: on VPSes, the shell logs which commands were entered. Every time you log in via SSH, press the up key; that shows a recently entered command. So if you check which commands were entered before and after the attack, you can show that you did not use any type of editing software (vim, nano, etc.) to edit those logs. This of course assumes you're running Linux on that VPS.

J Doe
  • 43
  • 2
  • 1
    yea, but those logs (history) are easily editable as well. What's more, `history -c` just clears it out. – TACO Feb 01 '19 at 06:11
  • 1
    I mean, other than using forensic programs to pull up memory and deleted data, you won't even prove it, because even then you'll have some files or logs that won't be entirely recoverable. The only way to fully prove the logs weren't touched is to have something logging everything to a file that you couldn't have touched or don't have permission to. – J Doe Feb 01 '19 at 06:21
  • Also, you can contact their ISP, let them know about it, and ask for the logs because you wish to proceed with legal action. That's really the only way to prove it. – J Doe Feb 01 '19 at 06:23
  • I thought maybe there would be a way to have the system automatically sign and hash a file or something. – TACO Feb 01 '19 at 06:56
  • 2
    @TACO Considering that Apache is fully open source, what stops you from modifying Apache to fake logs? – vidarlo Feb 01 '19 at 08:10
  • This type of evidence can only prove a positive, not a negative. If a command is there, then you have support for the theory that the command was run. If a command is not there, it proves nothing. – schroeder Feb 01 '19 at 10:01