6

My server is getting hit with a variety of requests like the following:

Started GET "/key/values"
ActionController::RoutingError (No route matches [GET] "/key/values")

Started GET "/loaded"
ActionController::RoutingError (No route matches [GET] "/loaded")

Started GET "/top/left"
ActionController::RoutingError (No route matches [GET] "/top/left")

How should I defend against such attacks? Will these requests slow down my site even if they do not get a response?

MicFin
    What attack are you worried about here? Could you provide more details, I just see a routing table. I think most attackers are more interested in code exec and info disclosure than DoS. – rook Jul 28 '15 at 06:32
    I can't quite see an attack here. Someone requests information that isn't available, so you return a 404 status. Unless there is a password required to get to your site at all, in which case you return a 401. That's just normal behaviour that you should take in your stride. – gnasher729 Jul 28 '15 at 12:31

3 Answers

10

How can I defend against malicious GET requests?

These requests do not look malicious. At least based on your description they don't cause any harm, i.e. no unwanted code execution, SQL injection or similar attacks; they only consume some resources to process. What you see is what every operator of a web server sees in the log files: lots of requests that don't match anything on the server, because somebody is scanning the internet looking for vulnerable systems. Such requests can also be caused by changes to your site, where formerly valid URLs are now invalid but robots still remember the old URLs and check them for updates.

If these requests bother you, you can either change your application to ignore them (and not log them as errors) or filter them with a server/reverse proxy in front of your application.
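Since the log lines are from Rails, the "change your application to ignore such requests" option can be sketched as a catch-all route that answers with a quiet 404 instead of raising `ActionController::RoutingError`. This is only a minimal sketch; the controller and action names are illustrative, not from the question:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # ... your real routes go first ...

  # Catch anything that matched nothing above and return a plain 404
  # instead of raising ActionController::RoutingError.
  match "*unmatched", to: "errors#not_found", via: :all
end

# app/controllers/errors_controller.rb
class ErrorsController < ActionController::Base
  def not_found
    head :not_found   # empty 404 response, no error logged
  end
end
```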

While the suggested web application firewall (WAF) can filter such requests, I would consider it overkill to install one just to filter these harmless requests. But properly used (i.e. adapted to the application instead of just installed and forgotten, as is often the case) it can be a useful layer of protection. The suggested fail2ban can help too, but only if these requests originate from a few single IP addresses, which is often not the case. Moreover, relying on fail2ban can lock out valid users: invalid links to your site can be included in mails or on other websites, so the originating IP address of a request may belong not to an attacker but to an innocent user who was tricked into visiting the link. fail2ban only sees the originating IP and the error message, and then locks out the innocent user.

What you should do is make sure that your application is not vulnerable; OWASP is a good resource for learning how to secure web applications. If your application is secure, then it has no problem dealing with this kind of request. But to limit the resources wasted on invalid requests, you should make sure they are detected and rejected as early as possible in your application, i.e. it would be bad if invalid requests also triggered costly queries against a database.
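The "reject as early as possible" advice can be sketched as a tiny Rack middleware that short-circuits obviously bogus paths before they ever reach the Rails router or the database. The class name and path patterns are illustrative, based only on the logs above:

```ruby
# Hypothetical Rack middleware: answer known-bogus paths with a 404
# before the rest of the application stack (and any DB queries) runs.
class EarlyReject
  # Patterns taken from the request logs shown in the question.
  BOGUS_PATHS = [%r{\A/key/}, %r{\A/top/}, %r{\A/loaded\z}].freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    if BOGUS_PATHS.any? { |re| re.match?(env["PATH_INFO"]) }
      [404, { "Content-Type" => "text/plain" }, ["Not Found"]]
    else
      @app.call(env)   # pass everything else through untouched
    end
  end
end

# In a Rails app this could be enabled early in the stack with:
#   config.middleware.insert_before 0, EarlyReject
```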

Steffen Ullrich
    Seriously, if _these_ requests caused any harm, then you need new server developers. – gnasher729 Jul 28 '15 at 12:32
  • Thanks! No harm was done, I'm just new to maintaining a server and the requests seemed possibly threatening since some attempted to retrieve keys and such – MicFin Jul 28 '15 at 14:28
  • Overflowing logs causing old logs to be discarded and/or a full disk from log trash may be considered harmful. – Arc Jul 28 '15 at 18:06
5

In conjunction with what @SakamakiIzayoi suggested:

Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs -- too many password failures, probing for exploits, etc. Generally Fail2Ban is then used to update firewall rules to reject the IP addresses for a specified amount of time, although any other arbitrary action (e.g. sending an email) can also be configured. Out of the box Fail2Ban comes with filters for various services (apache, courier, ssh, etc.).

Source.
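As a concrete illustration, enabling fail2ban's stock `apache-noscript` jail (which bans clients probing for non-existent scripts) looks roughly like this; the paths and thresholds shown are examples, not taken from the answer:

```ini
# /etc/fail2ban/jail.local -- values shown are illustrative
[apache-noscript]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 6
bantime  = 600   ; seconds the offending IP stays banned
```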

3

The easiest defense solution would be to install a Web Application Firewall.

You can find in-depth descriptions regarding them on OWASP and Wikipedia.

I doubt these requests will slow down your site. Attackers would most likely request existing items, as that would be far more effective at wasting your web server's resources.