Yes, of course it is possible in NGINX!
What you could do is implement the following DFA:
1. Implement rate limiting, based on $http_referer, possibly using some regex through a map to normalise the values (see the sketch after this list). When the limit is exceeded, an internal error page is raised, which you can catch through an error_page handler as per a related question, going to a new internal location as an internal redirect (not visible to the client).
2. In the above location for exceeded limits, you perform an alert request, letting external logic perform the notification; this request is subsequently cached, ensuring you will only get 1 unique request per a given time window.
3. Catch the HTTP status code of the prior request (by returning a status code ≥ 300 and using proxy_intercept_errors on, or, alternatively, use the not-built-by-default auth_request or add_after_body to make a "free" subrequest), and complete the original request as if the prior step wasn't involved. Note that we need to enable recursive error_page handling for this to work.
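For the normalisation in step 1, a map could, for instance, reduce the referer to just its host, so that all deep links from one referring site share a single counter. This is a minimal sketch, separate from the PoC below; the $slash_ref variable name and the regex are only illustrative:

map $http_referer $slash_ref {
    default              $http_referer;  # fall back to the raw referer
    ""                   "";             # no referer: see note below
    ~^https?://([^/]+)   $1;             # keep only the host part
}
#limit_req_zone $slash_ref zone=slash:10m rate=1r/m;

Note that limit_req_zone does not account requests whose key is empty, so mapping an absent referer to "" effectively exempts referer-less traffic from this particular limit.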
Here's my PoC and an MVP, also at https://github.com/cnst/StackOverflow.cnst.nginx.conf/blob/master/sf.432636.detecting-slashdot-effect-in-nginx.conf:
limit_req_zone $http_referer zone=slash:10m rate=1r/m; # XXX: how many req/minute?
server {
    listen 2636;
    location / {
        limit_req zone=slash nodelay;
        #limit_req_status 429; #nginx 1.3.15
        #error_page 429 = @dot;
        error_page 503 = @dot;
        proxy_pass http://localhost:2635;
        # an outright `return 200` has a higher precedence over the limit
    }
    recursive_error_pages on;
    location @dot {
        proxy_pass http://127.0.0.1:2637/?ref=$http_referer;
        # if you don't have `resolver`, no URI modification is allowed:
        #proxy_pass http://localhost:2637;
        proxy_intercept_errors on;
        error_page 429 = @slash;
    }
    location @slash {
        # XXX: placeholder for your content:
        return 200 "$uri: we're too fast!\n";
    }
}
server {
    listen 2635;
    # XXX: placeholder for your content:
    return 200 "$uri: going steady\n";
}
proxy_cache_path /tmp/nginx/slashdotted inactive=1h
        max_size=64m keys_zone=slashdotted:10m;
server {
    # we need to flip the 200 status into the one >=300, so that
    # we can then catch it through proxy_intercept_errors above
    listen 2637;
    error_page 429 @/.;
    return 429;
    location @/. {
        proxy_cache slashdotted;
        proxy_cache_valid 200 60s; # XXX: how often to get notifications?
        proxy_pass http://localhost:2638;
    }
}
server {
    # IRL this would be an actual script, or
    # a proxy_pass redirect to an HTTP to SMS or SMTP gateway
    listen 2638;
    return 200 authorities_alerted\n;
}
Note that this works as expected:
% sh -c 'rm /tmp/slashdotted.nginx/*; mkdir /tmp/slashdotted.nginx; nginx -s reload; for i in 1 2 3; do curl -H "Referer: test" localhost:2636; sleep 2; done; tail /var/log/nginx/access.log'
/: going steady
/: we're too fast!
/: we're too fast!
127.0.0.1 - - [26/Aug/2017:02:05:49 +0200] "GET / HTTP/1.1" 200 16 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:49 +0200] "GET / HTTP/1.0" 200 16 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:51 +0200] "GET / HTTP/1.1" 200 19 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:51 +0200] "GET /?ref=test HTTP/1.0" 200 20 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:51 +0200] "GET /?ref=test HTTP/1.0" 429 20 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:53 +0200] "GET / HTTP/1.1" 200 19 "test" "curl/7.26.0"
127.0.0.1 - - [26/Aug/2017:02:05:53 +0200] "GET /?ref=test HTTP/1.0" 429 20 "test" "curl/7.26.0"
%
You can see that the first request results in one front-end and one backend hit, as expected (I had to add a dummy backend to the location that has limit_req, because a return 200 would take precedence over the limit; a real backend isn't necessary for the rest of the handling).
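To illustrate that precedence issue, here is a minimal sketch, separate from the PoC above (the zone name is reused only for illustration): return is handled in the rewrite phase, which runs before the preaccess phase where limit_req does its accounting, so a location answered by a plain return is never rate-limited, whereas one handled by proxy_pass in the content phase is:

location /never-limited {
    limit_req zone=slash nodelay;
    return 200 "rewrite phase answers first; limit_req never fires\n";
}
location /limited {
    limit_req zone=slash nodelay;
    proxy_pass http://localhost:2635;  # content phase, so limit_req is applied first
}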
The second request is above the limit, so we send the alert (getting 200) and cache it, returning 429 (this is necessary due to the aforementioned limitation that responses with a status below 300 cannot be intercepted); the 429 is subsequently caught by the front-end, which is then free to do whatever it wants.
The third request still exceeds the limit, but we've already sent the alert, so no new alert gets sent.
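As an aside, on nginx 1.3.15 or newer you could probably have the limit itself emit 429 instead of the default 503 by uncommenting the two lines from the PoC; a sketch, functionally equivalent to catching 503:

location / {
    limit_req zone=slash nodelay;
    limit_req_status 429;   # available since nginx 1.3.15; default is 503
    error_page 429 = @dot;  # catch the limit-exceeded status directly
    proxy_pass http://localhost:2635;
}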
Done! Don't forget to fork it on GitHub!