
My server's CPU has been spiking for the past few days, to the point that it drops requests. I've been looking through the logs and see certain IPs that are simply downloading the content of my site (the .js and .css files, images, etc.) over and over again. The IPs are usually based in China. I also see other requests from them trying to find files on the server that don't exist, like

www.example.com/lawson-consultant-in-independence/

or

/lawson-developer-in-sterling-heights/

These addresses have nothing to do with our site.

I tried blocking the first IP, but it looks like the traffic is now coming through many different IPs. I think this is what's making my CPU spike.

My question is, how do I prevent this type of stuff from happening? How can I respond to keep my site running and available?

I was reading that there are ways to block clients that send too many requests within a certain time period, but I don't want to accidentally block legitimate spiders like Bingbot, Googlebot, and the rest. What do people do in this case to prevent this type of attack, if it is an attack?

Joel Coel
Zee Tee
    Possible duplicate of How do I Deal With a Compromised Server: http://serverfault.com/questions/218005/how-do-i-deal-with-a-compromised-server – David W Apr 09 '13 at 11:20
  • Well it's not compromised because it still works, it's just really slow – Zee Tee Apr 09 '13 at 11:21
    Have any of these IPs retrieved your `robots.txt` file? Does your `robots.txt` file prohibit these requests? What is the `Host` header in these requests? It could just be a legitimate spider following bad links or an old DNS entry from whatever had your IP address before you. – David Schwartz Apr 09 '13 at 11:24
  • @DavidSchwartz We've had this IP for over 10 years, so I don't think that's it. I don't see that IP ever requesting the robots.txt file. I don't know how to get the Host header of the requests; I'm looking in the IIS logs. – Zee Tee Apr 09 '13 at 11:25
  • You do not indicate the rate of requests. Noticing the file does not exist and sending back a 404 is very little work and the HTTP server should be able to do this thousands of times per second without slowing down the machine. – bortzmeyer Apr 14 '13 at 15:58

3 Answers


If you don't need to serve content to those IPs, block them. Block the entire netblock, or the entire country if you don't want to serve content to China.
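On Linux, blocking a whole netblock is a single iptables rule. A sketch as an iptables-restore fragment, where 203.0.113.0/24 is a placeholder documentation range; substitute the actual ranges you see in your logs:

```
# Drop everything from one /24 netblock (203.0.113.0/24 is a placeholder range)
-A INPUT -s 203.0.113.0/24 -j DROP
```

For whole-country blocking, per-country CIDR lists are published by the regional registries; loading them into an ipset keeps the rule count manageable.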

Alternatively, tools such as fail2ban can be used to block any IP that tries to visit a nonexistent page.
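As an illustration of that approach, a minimal fail2ban setup might look like the following. This is a sketch, not a stock filter: the filter name `apache-404`, the failregex, and the log path are assumptions you'd adapt to your own server and log format.

```ini
# /etc/fail2ban/filter.d/apache-404.conf  (hypothetical filter)
[Definition]
# match any request that the server answered with a 404
failregex = ^<HOST> .* "(GET|POST|HEAD) [^"]*" 404

# /etc/fail2ban/jail.local  (hypothetical jail section)
[apache-404]
enabled  = true
port     = http,https
filter   = apache-404
logpath  = /var/log/apache2/access.log
maxretry = 20
findtime = 60
bantime  = 3600
```

With thresholds like these (20 missing-page hits in 60 seconds), well-behaved crawlers that follow robots.txt are unlikely to trip the ban, while a scanner hammering nonexistent URLs gets dropped for an hour.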

For Windows, there's ts.block, or Evan Anderson's tool described here: Does fail2ban do Windows?

Rory Alsop

Like Rory said, Fail2ban is a great tool. A little "hot fix" I learned in college was adding the following rules to iptables until you can solve the problem in a real way:

-A INPUT -p tcp -m tcp --dport 80 -m recent --set --name HTTP --rsource
-A INPUT -p tcp -m tcp --dport 80 -m recent ! --rcheck --seconds 10 --hitcount 20 --name HTTP --rsource -j ACCEPT

The same thing can be done for port 443 by replacing --dport 80 with --dport 443.

What this does is accept only clients that make fewer than X requests in Y seconds; anything over the threshold falls through, so it only actually blocks if a later rule (or the INPUT chain's default policy) drops the remaining traffic. You can change the --seconds and --hitcount values to fit your needs.

EDIT: I failed to read that you're on IIS... sorry!
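That said, IIS 8 has a rough built-in equivalent: the Dynamic IP Restrictions feature can deny clients by request rate from web.config. A sketch, assuming the feature is installed; the thresholds below are placeholders to tune:

```xml
<configuration>
  <system.webServer>
    <security>
      <dynamicIpSecurity denyAction="Forbidden">
        <!-- deny an IP that makes more than 20 requests in a 10-second window -->
        <denyByRequestRate enabled="true"
                           maxRequests="20"
                           requestIntervalInMilliseconds="10000" />
      </dynamicIpSecurity>
    </security>
  </system.webServer>
</configuration>
```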


You'll need to make sure your log files capture every available IIS logging field, and then get yourself a good log file reader (the kind that can break down where long and/or frequent requests come from). As noted in another answer, don't be afraid to ban entire IP ranges if they're from China, Russia, etc. and you see similar IP range patterns. Web Log Expert is a favorite of mine for the reports it generates.
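If you'd rather script it than use a GUI reader, a few lines of Python can tally requests per client IP straight from W3C-format IIS logs. A sketch, assuming the standard `#Fields:` directive names the columns; the sample log text is made up for illustration:

```python
from collections import Counter

# Made-up W3C extended log sample; real IIS logs have more fields.
SAMPLE = """#Fields: date time c-ip cs-method cs-uri-stem sc-status
2013-04-09 11:20:01 198.51.100.7 GET /index.html 200
2013-04-09 11:20:02 198.51.100.7 GET /lawson-consultant-in-independence/ 404
2013-04-09 11:20:03 203.0.113.9 GET /style.css 200
2013-04-09 11:20:04 198.51.100.7 GET /app.js 404
"""

def top_clients(log_text, status_filter=None):
    """Count requests per client IP, optionally restricted to one HTTP status."""
    counts = Counter()
    fields = []
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # column names follow the directive
            continue
        if not line or line.startswith("#"):
            continue                    # skip blanks and other directives
        row = dict(zip(fields, line.split()))
        if status_filter and row.get("sc-status") != status_filter:
            continue
        counts[row["c-ip"]] += 1
    return counts.most_common()

print(top_clients(SAMPLE))                      # [('198.51.100.7', 3), ('203.0.113.9', 1)]
print(top_clients(SAMPLE, status_filter="404"))
```

Filtering on status 404 quickly surfaces the IPs probing for pages that don't exist, which are the best candidates for a ban.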

Techie Joe