
I'm trying to limit requests with nginx based on the response status code. I'd like to slow down clients that receive too many 4xx or 5xx responses. I have this block in my config file:

map $status $bad_guy {
    ~^[23]  "";
    default $binary_remote_addr;
}
limit_req_zone "$bad_guy" zone=badguy:10m rate=1r/s;

server {
    limit_req zone=badguy burst=20;

It seems that the above config limits all IP addresses sending more than 1 r/s, including those that only receive 200 OK responses.

Could you help me please? Why does the above config not work? Do I have to use something else (maybe openresty?) to achieve this? Thank you.

Viet Pham

1 Answer


This is quite tricky because the $status variable is still empty when nginx evaluates the key you declared in limit_req_zone. The request limit is applied while the request is being processed, and $status only becomes known after nginx has produced a response, for example after a proxy_pass has completed. With your map that means the default branch always matches, so the zone is keyed on $binary_remote_addr for every request, and every client above 1 r/s gets limited no matter which status codes it ends up receiving.
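To make the ordering concrete, here is your snippet again, wrapped in a minimal location (the location and the example.org upstream are placeholders I added for illustration), with comments on when each part runs:

map $status $bad_guy {
    ~^[23]  "";                  # never matches at limit time: $status is still empty
    default $binary_remote_addr; # so this branch is taken for every request
}
limit_req_zone "$bad_guy" zone=badguy:10m rate=1r/s;

server {
    location / {
        # limit_req runs in the preaccess phase, before the request is proxied,
        # so the map above is evaluated while $status is still empty
        limit_req zone=badguy burst=20;

        # only after this completes does $status get a value
        proxy_pass http://example.org;
    }
}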

The closest I could get to rate limiting by status is the following:

...
...
...
limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;
...
...
...
server {
    location /mylocation {
        proxy_intercept_errors on;
        proxy_pass http://example.org;
        error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 426 428 429 431 451 500 501 502 503 504 505 506 507 508 510 511 @custom_error;
    }

    location @custom_error {
        limit_req zone=api burst=5 nodelay;
        return <some_error_code>;
    }

}
...

The drawback is that this way you must return a different status code than the one the proxied response actually had.
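Since you mention openresty yourself: a rough sketch of the same idea there could look like the following. The shared dict name, the 60-second window and the threshold of 20 errors are arbitrary values I picked for illustration, not tested settings:

lua_shared_dict bad_clients 10m;

server {
    location /mylocation {
        access_by_lua_block {
            -- refuse clients that recently produced too many error responses
            local errs = ngx.shared.bad_clients:get(ngx.var.remote_addr) or 0
            if errs > 20 then
                return ngx.exit(429)
            end
        }

        proxy_pass http://example.org;

        log_by_lua_block {
            -- the log phase runs after the response, so ngx.status is known here
            if ngx.status >= 400 then
                -- count errors in a roughly 60-second window; the fourth (expiry)
                -- argument needs lua-nginx-module >= 0.10.12
                ngx.shared.bad_clients:incr(ngx.var.remote_addr, 1, 0, 60)
            end
        }
    }
}

Clients below the threshold keep getting the upstream's original status codes; only clients over it are cut off with a 429.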

finrod
  • Doesn't this rate limit the custom error page instead of the URL we are trying to protect, `/mylocation`? In that case the request still gets routed to the back-end; it's just that the rate-limited user would not see the response. Would adding the `limit_req` clause to the `location /mylocation` block help? – NeverEndingQueue Jun 02 '21 at 12:53