
(Regarding the suggested duplicate: I don't see many requests; the number is rather small. Instead, each request downloads a large amount of data.)

The server I'm talking about has 2x10 GBit/sec of Internet connectivity, with a backend of 40 GBit/sec. It serves around 20 TByte of data to the public, using nginx/vsftpd/rsyncd on a Debian Stable system. In addition, apache2 is used to serve some non-static content, but this can be disregarded.

The hardware is beefy enough to serve up to around 18 GBit/sec (as observed once), and traffic is free. As the server is a mirror of open source software and other public software, there's also not an issue of downtime being a critical problem.

However, I observe a specific pattern of DDoS attack that I'd like to stop from affecting the server. While the attack is ongoing, most of the Debian DVD ISOs (around 300 GByte, so far more than fits in RAM) are downloaded by multiple hosts, with each file being downloaded repeatedly. Depending on how organized the attack is, this increases the bandwidth quite a lot and of course puts some stress on the hardware, while degrading the experience for legitimate users of the server at the same time.

In these attacks, typically 2-3 networks are coordinated, each downloading files as described. Most of the time it seems that one-click hosters or file caches of some sort are being abused, tricked into downloading the same file over and over, with this automated to cover a number of different files as part of the attack.

Is there any way I can configure nginx to auto-ban certain IP ranges? Or limit traffic rates to, say, 1 GBit/sec for these networks (for some time)?

I don't want to impose a general limit, as the server actually should be used, even for high-speed transfers (mirror to mirror, most likely).

As a remark, a clever attacker, whatever the motivation might be, could start to abuse FTP/RSYNC instead of HTTP, working around the solutions this question might produce.

Currently, when I realize a DDoS attack is going on, I scan the log files, identify the abusing networks, and ban them manually.
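For reference, that manual log-scanning step can be sketched as a short shell pipeline. The log path, the sample entries, and the 1 GByte threshold are illustrative assumptions, not values from the actual server:

```shell
# Sketch of the manual workflow: sum bytes sent per /24 network from an
# nginx access log (combined format; field 10 is the response body size)
# and print networks above a hypothetical threshold.
# The sample log below stands in for /var/log/nginx/access.log.
cat > /tmp/sample_access.log <<'EOF'
203.0.113.5 - - [28/Aug/2016:10:00:00 +0000] "GET /debian-dvd-1.iso HTTP/1.1" 200 4700000000 "-" "-"
203.0.113.9 - - [28/Aug/2016:10:01:00 +0000] "GET /debian-dvd-2.iso HTTP/1.1" 200 4700000000 "-" "-"
198.51.100.7 - - [28/Aug/2016:10:02:00 +0000] "GET /README HTTP/1.1" 200 1024 "-" "-"
EOF

awk '{ split($1, o, "."); net = o[1] "." o[2] "." o[3] ".0/24"; bytes[net] += $10 }
     END { for (n in bytes) if (bytes[n] > 1e9) printf "%s %.0f\n", n, bytes[n] }' \
    /tmp/sample_access.log
```

The networks printed could then be fed into the manual ban step, e.g. an iptables DROP rule per offending range.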

C-Otto
    Fail2ban software is designed to do things like this. However, you may need to code your own rules inside it to accomplish your goal. – Tero Kilkanen Aug 28 '16 at 21:46
    Possible duplicate of [I am under DDoS. What can I do?](http://serverfault.com/questions/531941/i-am-under-ddos-what-can-i-do) – user9517 Aug 30 '16 at 08:21
  • There's a good article by the Server Fault team about using HAProxy to limit the bandwidth of an abusive requester. You could always set it so that the first download is at full speed, and subsequent ones are capped at X speed. https://blog.serverfault.com/2010/08/26/1016491873/ – Brennen Smith Mar 01 '18 at 20:30

2 Answers


You can actually use the nginx limit_req module, and also limit_conn.

Both modules can limit the number of connections from a specific source as well as the rate of requests made from individual IPs, which in your case may be very helpful.
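A minimal sketch of how these directives combine (zone names, sizes, and limits are illustrative assumptions, not values tuned for this server):

```nginx
# In the http block: shared-memory zones keyed by client address.
limit_req_zone  $binary_remote_addr zone=reqs:10m rate=10r/m;  # hypothetical rate
limit_conn_zone $binary_remote_addr zone=conns:10m;

server {
    location /debian/ {
        limit_req  zone=reqs burst=5;  # queue short bursts, reject the rest
        limit_conn conns 4;            # at most 4 parallel downloads per IP
    }
}
```

Note that for the attack described in the question (few requests, each transferring a huge file), limiting concurrent connections and bandwidth is more relevant than limiting the request rate.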

As requested, nginx can also be used to limit bandwidth.

```nginx
location ^~ /videos/ {
    ...
    limit_rate_after 100m;
    limit_rate 150k;
    ...
}
```

In this example, after the first 100 MByte (limit_rate_after 100m;) nginx will throttle the connection (per connection, be aware of this) to a maximum of 150 KByte/s. So if, for example, you want to allow up to 100 MByte at full bandwidth and then restrict the speed, this can help you.

Be aware that this solution limits nginx's download speed per connection, so if one user opens multiple video files, they will be able to download at 150 KByte/s times the number of connections they opened. If you need to limit the number of connections, you can do so with the limit_zone and limit_conn directives (in current nginx versions, limit_zone has been replaced by limit_conn_zone). Example:

Inside your server block configuration:

```nginx
limit_rate 128K;
limit_zone one $binary_remote_addr 10m;
```

Inside your location block configuration:

```nginx
limit_conn one 10;
```

In this example, each IP would be allowed 10 connections at roughly 1 MBit/s each.

Credits

x86fantini
  • I knew about these before, but how do I find out which IPs to put there? If I still have the manual step, I don't see an advantage over blocking straight away. – C-Otto Aug 29 '16 at 06:31
  • Have you read the docs for the modules? A DDoS means massive data sent from one or multiple IPs. With limit_req you limit per IP address (populated automatically by nginx) – x86fantini Aug 29 '16 at 16:35
  • The DDoS as described in the question is not done using a massive number of requests, but instead by a tiny number of requests which each request a huge file (and download it to completion). – C-Otto Aug 29 '16 at 17:32
  • I have edited my answer to better suit your need. Is it OK now? – x86fantini Aug 30 '16 at 08:15
  • Sadly not. I don't think that any solution just looking at a single IP or transaction would help. In fact, this would cause harm with the regular users. – C-Otto Aug 30 '16 at 18:26

You could use fail2ban with a configuration that scans the nginx access logs. There are lots of guides around the web that help with this.
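A minimal sketch of such a setup, assuming a hypothetical filter name, log path, and mirror layout (the thresholds are illustrative, not tuned):

```ini
; /etc/fail2ban/filter.d/nginx-iso-abuse.conf  (hypothetical filter name)
[Definition]
; Match completed ISO downloads in the access log; the path pattern
; is an assumption about this mirror's directory layout.
failregex = ^<HOST> .* "GET /debian-cd/.*\.iso HTTP/.*" 200
```

```ini
; /etc/fail2ban/jail.local  (excerpt)
[nginx-iso-abuse]
enabled   = true
filter    = nginx-iso-abuse
logpath   = /var/log/nginx/access.log
maxretry  = 5        ; ban after 5 matching downloads ...
findtime  = 3600     ; ... within one hour
bantime   = 86400    ; ban for a day
banaction = iptables-allports
```

Keep in mind that fail2ban bans per matched host; against a coordinated attack from several networks, each participating IP has to trip the threshold on its own.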

It's really hard to block these kinds of attacks without specialized software/hardware that does pattern recognition. You could probably use some kind of blacklist, although I'm not sure there is one that fits your use case.

Another solution would be some kind of JavaScript wall that blocks downloads unless the browser executes JavaScript, but that's bad practice and blocks legitimate users of curl/wget.

Gothrek