
I'm currently reading the Blue Team Handbook, and one of the recommended methods for detecting rootkits is to compare technical data obtained from different sources. As I understand it, this is done like so:

  1. Compare a list of all files on a HDD obtained through the system with a list of all files obtained using a live CD.

  2. Compare a list of all connections shown by the system with a list of connections captured by a network tap.

  3. Compare a list of all processes shown by the system with a list of processes extracted from a memory image with a tool like Volatility.
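
For point 3, for example, I imagine a simplified cross-view check on a live system could look roughly like this (my own rough sketch, not from the book):

    #!/bin/bash
    # Cross-view diff: PIDs reported by ps vs. PIDs visible via a raw /proc walk.
    # Both views ultimately come from the same kernel, so this only catches
    # unsophisticated hiding (e.g. a trojaned ps binary); processes starting or
    # exiting between the two snapshots will also cause harmless mismatches.
    ps -eo pid= | tr -d ' ' | sort -n > /tmp/ps_pids
    for d in /proc/[0-9]*; do echo "${d#/proc/}"; done | sort -n > /tmp/proc_pids
    diff /tmp/ps_pids /tmp/proc_pids || echo "Mismatch - worth investigating"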

The lists should all match up, of course. If they don't, the host might be compromised. As I have quite some experience in scripting, I'm able to do all of these things, but is this really a common and good method of detection?

It's quite costly, and to be honest I don't know a single person who has ever done something as elaborate as that to detect a rootkit. It should be noted that I'm an administrator and not an IR guy.


1 Answer


It's a good method of detection. To repeat:

  • Memory check for hidden processes
  • Network traffic check
  • Filesystem check

This approach is flexible, scalable and secure, but it goes well beyond everyday scripting once large scale is involved, which is where it is most worthwhile. That doesn't mean it won't work on a smaller scale, only that the effort may be too high. A possible implementation could look like this:

  • Memory checks built-in Virtualization Host
  • Network traffic monitoring on the network layer (e.g. netflow)
  • Filesystem checks on the network storage layer

Now the problem is that since this is quite a custom job, you won't really be able to rely on the results until sufficient time and money have been spent developing the solution. It works, but it makes the most sense at large scale, say 10,000 servers or more, especially if those servers run different workloads and are not all the same kind of machine. There are trade-offs you can make, though: you might evaluate existing software to cover parts of it, which makes things easier; see below.

If you want to try it yourself, start by setting up NetFlow (or sFlow) monitoring; with just that, you can successfully detect malicious traffic as it happens, and using ready-made software is more efficient than building your own. For example, you can define patterns of packets that should raise an alarm. A lot of unusual traffic can be detected this way, not only traffic related to hacking, and that helps a lot. It's cheap, easy, multi-purpose and effective. The only thing is that you need to come up with proper patterns (i.e. which packets you want to flag); these can be basically anything unusual but permitted, e.g. outgoing TCP connections, which can be whitelisted after initial detection if they turn out not to be malicious. Another easy approach is to run tcpdump, then netstat, and compare the results. This is less useful because it only detects hidden network connections, and if you search the internet you might find helpful scripts for it already.
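
As a rough illustration of the tcpdump/netstat comparison (a minimal sketch; the interface name and the 30-second window are assumptions, and it handles IPv4 only):

    #!/bin/bash
    # Compare remote IPv4 endpoints seen on the wire with the peers the host
    # itself reports. An address present in the capture but absent from the ss
    # output deserves a closer look (though short-lived connections that closed
    # during the capture, and the host's own address, will show up as noise).
    timeout 30 tcpdump -i eth0 -nn -l -q tcp 2>/dev/null \
      | awk '{print $3; print $5}' | cut -d. -f1-4 | sort -u > /tmp/wire_ips
    ss -tn | awk 'NR>1 {split($5,a,":"); print a[1]}' | sort -u > /tmp/host_ips
    comm -23 /tmp/wire_ips /tmp/host_ips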

Regarding filesystems, you can use an ordinary script to check the consistency of binaries (for example, using the standard rpm or dpkg verification commands) and of config files. Such a script can be run from an external host that connects over ssh, uploads a generated script (e.g. with the latest checksums calculated from the git repo where the configs live), runs the check and returns the result. Rundeck is one piece of software that can help with this. Running the command from an external host assures you that it is being run at all in the first place, and it's a simple and efficient method in the end.
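
A minimal sketch of such an external check (the hostname is a placeholder; rpm -Va and dpkg --verify are the standard verification commands, though a kernel-level rootkit could in principle lie to them):

    #!/bin/bash
    # Run from a trusted host: verify package-managed files on the target over
    # ssh. Any output line means a file's digest, permissions or owner no
    # longer match the package database.
    TARGET="admin@server.example.com"   # placeholder
    ssh "$TARGET" 'if command -v rpm >/dev/null 2>&1; then rpm -Va --nomtime;
                   elif command -v dpkg >/dev/null 2>&1; then dpkg --verify;
                   fi' 2>/dev/null > /tmp/verify_report
    [ -s /tmp/verify_report ] && { echo "Modified files:"; cat /tmp/verify_report; }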

Regarding RAM, this is the most difficult part, and you may want to skip it for the moment because it's more advanced. Dumping and analyzing RAM is far from easy.
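
If you do get to it later, the usual Volatility cross-view (the question already mentions the tool) compares the walked process list against a raw scan of memory, roughly like this (the image name and profile are assumptions):

    # Volatility 2: pslist walks the kernel's linked process list, psscan
    # carves process structures out of raw memory. A process found by psscan
    # but missing from pslist is a classic sign of DKOM-style hiding.
    vol.py -f memory.img --profile=Win7SP1x64 pslist > pslist.txt
    vol.py -f memory.img --profile=Win7SP1x64 psscan > psscan.txt
    # ...then diff the PID columns of the two files.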

Also look at the chkrootkit and rkhunter scripts and similar tools. There's the free antivirus ClamAV and paid solutions like Kaspersky. These check for known backdoors; however, in an average corporate / hosting environment it's a 50/50 chance whether you're facing a custom-patched sshd with a secret key or a known backdoor.
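
Typical invocations, if you want to try them:

    rkhunter --update && rkhunter --check --sk --rwo   # --sk: no keypresses, --rwo: warnings only
    chkrootkit
    clamscan -r --infected /srv                        # ClamAV; the path is just an example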

And don't forget to verify that the kernel itself is not patched and that no malicious module is loaded, but that's a separate subject. I would research it and use the same script to check both the kernel and hidden processes. There may already be scripts on the internet that do this.
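
A simple cross-view for hidden kernel modules, in the same spirit (a rough sketch; a module that hides from both views will of course not show up):

    #!/bin/bash
    # Compare the module list behind lsmod (/proc/modules) with loaded modules
    # visible in sysfs. Some LKM rootkits unlink themselves from the list that
    # /proc/modules exposes but still leave traces in /sys/module.
    awk '{print $1}' /proc/modules | sort > /tmp/proc_mods
    # Only loadable modules have a refcnt entry (assumes CONFIG_MODULE_UNLOAD).
    for d in /sys/module/*/; do
      [ -e "${d}refcnt" ] && basename "$d"
    done | sort > /tmp/sys_mods
    comm -13 /tmp/proc_mods /tmp/sys_mods   # in sysfs but hidden from lsmod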

In summary, NetFlow (or sFlow, a closely related sampling-based alternative) will help you a lot, not only with security but with the network in general. It requires nothing on the servers themselves: the switch sends every n-th packet to the flow receiver, where it is checked against predefined patterns and counted for statistics. It should provide you with reports as well. Filesystem checks are easy to do with rpm and dpkg, and the rest can be checked from git or with a build server. Note that the package managers use cryptographic signatures, so you do not need to use a live CD. Finally, check the kernel and hidden processes and you are sorted. And before deploying software, scan it with Kaspersky.

Take your time, develop a plan, and bit by bit it's not hard to build it all.

  • I have done all of those things already, but never in such a structured way, so it isn't that new to me on a technical level. I aimed to do this on hosts that I suspect to be compromised, not on every host in a regular manner. Do you think it is necessary to do this on a scheduled basis? – davidb Jul 11 '16 at 14:54
  • I usually recommend putting in place both network monitoring (like SolarWinds or OpManager, so you get full SNMP and NetFlow; with the latest Nagios it can be done as well, but it's more work) and a set of scripts to routinely check for backdoors and the consistency of binaries and config files. Running chkrootkit / rkhunter regularly is part of the routine checks that are not suitable for Nagios etc. but are a good fit for Rundeck, which is a good engine and more reliable than crontab, which can be disabled. – Aria Jul 11 '16 at 15:31
  • As I'm working for a mid-sized business, we don't have the budget to buy that much software. We are running Nagios and we are doing continuous network monitoring for hygiene reasons with self-crafted software that utilizes nmap with XML output and PacketFu for passive and active monitoring. We are going to add OSSEC through OSSIM to the servers to monitor the modification of files. I'm able to export NetFlow data from our routers, but we aren't doing this yet. My main problem is the limited time. What do you think is the most important step of the ones mentioned above? – davidb Jul 11 '16 at 15:42