
What could have been sent to my server to cause it to reboot?

Details:

I have a for-internal-use LAMP server running Ubuntu 10.04 LTS (an upgrade is scheduled for that nebulous "when I have time"). It runs several in-house scripts and monitors, and is my preferred gateway for remoting in to the office. Over the last couple of months, SSH and web attack attempts have been increasing at a scary pace, and two weeks ago the server began rebooting for no visible reason. At first it was once overnight, then every night, and finally it escalated to every few hours.

I looked through all the system logs, which only show boot-up messages, then normal running messages, then boot-up messages again. I ran memtest, drive tests, and CPU tests, which all came back clean. So I turned my attention to the uninvited knockers-at-the-door.

I thought: there are NO legitimate reasons for anyone outside of the country to connect to this computer.

So I began grabbing the IP address of an obvious troll from the logs, using whois to pull up its hosting company, and banning that company's entire range:

iptables -I INPUT -s 1.180.0.0/14 -j DROP

But this seemed slow, so I started looking for a better list. While looking, the server rebooted again. I found this rather quickly: http://nebulous.frikafrax.com/2013/323/chinanet-spam and spent a handful of minutes cobbling together a Perl script to dump the entire set of ranges into iptables.
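For reference, a sketch of what such a script boils down to (not my exact code; it assumes the published ranges were saved locally to a hypothetical plain-text file, china-ranges.txt, one CIDR block per line, and that it runs as root):

#!/usr/bin/perl
# Sketch: read one CIDR block per line and insert a DROP rule for each.
use strict;
use warnings;

open my $fh, '<', 'china-ranges.txt' or die "Cannot open china-ranges.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    # keep only lines that look like a.b.c.d/nn
    next unless $line =~ m{^(\d{1,3}(?:\.\d{1,3}){3}/\d{1,2})\s*$};
    system('iptables', '-I', 'INPUT', '-s', $1, '-j', 'DROP') == 0
        or warn "iptables failed for $1\n";
}
close $fh;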

No more random server rebooting.

It has been three days with no reboots. So, now that my preamble is over, here is the question:

What could have been sent to my server to cause it to reboot? The evidence strongly suggests that the cause was not hardware but the effect of an attack, whether intended or an unintended side effect. I would like more information on this kind of attack and on ways to detect and prevent it in the future.

Any thoughts or specific experiences are welcome.

EEAA
Alderin
  • Check here: http://serverfault.com/questions/218005/how-do-i-deal-with-a-compromised-server – TheCleaner Feb 11 '14 at 21:06
  • I remember reading that post a good while ago. Nice refresher, but all indicators are that there was no successful compromise. – Alderin Feb 12 '14 at 01:53

1 Answer


I would like to have more information on this attack

Well, you're the one with the logs, so you'll need to do your own forensic work. It's not outside the realm of possibility that your server has been compromised, so do your due diligence in researching this. If you're not already running a resource monitor, install something like munin or sysstat at the very least, so you have a record of system resource usage levels.
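For example, getting sysstat collecting history on Ubuntu 10.04 looks roughly like this (a sketch; the package name and the /etc/default/sysstat toggle are the stock Ubuntu/Debian ones):

sudo apt-get install sysstat
# the collector ships disabled; flip the toggle in /etc/default/sysstat
sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
sudo service sysstat restart
# later, review load and memory history leading up to a reboot
sar -q
sar -r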

and ways to detect and prevent it in the future.

Put the server behind a VPN. Problem solved. If this is not an option, you should ensure password auth is disabled (use key auth), and consider running sshd on a non-standard port. Using a non-standard port doesn't increase your security at all, but will surely reduce much of the noise in your logs as well as system load from having to deal with all of the authentication attempts.
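If you go that route, the relevant sshd_config directives look something like this (the port number is only an example; restart ssh afterwards and confirm key login works from a second session before closing your current one):

# /etc/ssh/sshd_config (excerpt)
Port 2222
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes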

EEAA
  • The frustrating part is that there isn't an indicator in the logs that I can find. I was at the console for one of the reboots, so I removed it from the net to look for the reboot cause. I saw nothing. This is also when I ran the RAM, CPU, and disk tests. I was hoping someone would say "an attack named _this_ could cause that with **this** kernel or **this** network driver version". Because this server is my "fix the VPN when it dies" entry point, putting it behind a VPN is not an option. I do use port 22, but other ssh/http/https security measures have been taken. – Alderin Feb 12 '14 at 01:45