
I'm trying to implement a throttling feature on nginx that is shared across multiple servers in multiple datacenters. I would like to know the best practice for building this.

For example, let's say that I have an HTTP API running on two clusters of servers (behind a load balancer) located in two different datacenters. I would like to throttle a developer by his api-key to 1000 requests/hour. The developer has built a mobile application, which means that depending on where his final users are, requests will be served by both locations (the closest datacenter).

How would you enforce throttling in this particular scenario?

Mark
  • Would the throttling not be better suited to being handled by the actual application/backend? Remember you're dealing with a web server; you're not implementing complex logic in it. – jduncanator Apr 13 '14 at 08:31

1 Answer


The easiest way would be to implement throttling in each of the N data centers separately, with no cross-datacenter coordination. In your case, M = 1000 requests/hour and N = 2 data centers, so just use M/N = 500 as the throttle value in each datacenter.

See: NGINX - throttle requests to prevent abuse
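As a rough sketch of what that per-datacenter limit might look like: nginx's `limit_req` module expresses rates in requests per second or per minute, so 500 requests/hour has to be approximated (8r/m ≈ 480/hour). This assumes the API key is sent in an `X-Api-Key` header; adjust the key variable to wherever the key actually lives (e.g. a query argument).

```nginx
# Sketch only: per-API-key throttling for one of the two datacenters,
# using half of the global 1000 req/hour budget.
# limit_req_zone goes in the http{} context.
limit_req_zone $http_x_api_key zone=perkey:10m rate=8r/m;  # ~480 req/hour per key

server {
    listen 80;

    location /api/ {
        # Allow a small burst so legitimate spikes aren't rejected outright;
        # excess requests beyond the burst get a 429.
        limit_req zone=perkey burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend;
    }
}
```

One caveat with this approach: the split assumes traffic is evenly distributed between datacenters, which is rarely exact, so a developer whose users cluster near one datacenter may hit the 500/hour cap there while the other datacenter's budget goes unused.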

dmourati