I am looking for a way to flexibly manage outbound HTTP/HTTPS traffic so that it respects site policies, and that could be deployed at the "edge" of our datacenter network.
For example, we use several Web APIs that impose throttling limits like "no more than 4 requests per second" or "max 50K requests per day". Many people at the company use services like these, so I cannot centrally manage all requests in software: people run their jobs on different schedules and at different intensities. We are fine with that (it meets internal needs), but we realize that, in aggregate, we may generate so much concurrent traffic that a site blocks us, even though it's unintentional.
What I am hoping is that bandwidth management / traffic shaping solutions already exist in the network hardware world, and that we could deploy such a device at the edge of our datacenter network.
Ideally, I could then write L4 or L7 routing rules to ensure that no more than, for example, 4 req/sec outbound are generated by our datacenter. The remaining requests would, again ideally, be queued by the hardware for some reasonable length of time, with anything beyond the queue's capacity simply being refused. I realize there's no free lunch and that throttling will not solve the fundamental demand (requests) vs. supply (site policies) problem. However, it would let us smooth out requests over some window, say a day, so that we could use an external service in a properly restrained manner while still maximizing our use of it.
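To make the behaviour I'm after concrete, here is a rough software sketch, purely illustrative and not tied to any real appliance; the class name, rate, and queue size are made up. The point is only the semantics: pace outbound calls at a fixed rate, queue a bounded backlog, and refuse the excess.

```python
import queue
import threading
import time

class EdgeThrottle:
    """Illustrative leaky-bucket throttle: pace outbound requests at a fixed
    rate, hold a bounded backlog, and refuse anything beyond that.
    The rate and queue size are made-up placeholders."""

    def __init__(self, rate_per_sec=4, max_queue=50):
        self.interval = 1.0 / rate_per_sec            # spacing that yields rate_per_sec
        self.pending = queue.Queue(maxsize=max_queue) # bounded wait queue
        threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, send_fn):
        """Accept a request for later sending, or refuse it if the queue is full."""
        try:
            self.pending.put_nowait(send_fn)
            return True                               # accepted; sent when a slot frees up
        except queue.Full:
            return False                              # backlog exceeded: refuse outright

    def _drain(self):
        while True:
            send_fn = self.pending.get()              # next queued request
            send_fn()                                 # forward it upstream
            time.sleep(self.interval)                 # e.g. 4 req/sec -> 0.25 s between sends

# Example: the lambda stands in for an actual outbound HTTP call.
throttle = EdgeThrottle(rate_per_sec=4, max_queue=50)
accepted = throttle.submit(lambda: print("GET https://api.example.com/v1/items"))
time.sleep(1)  # give the drain thread a moment in this toy example
```

Again, I don't want to build this in software per client; I want the equivalent enforced once at the network edge.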
Does anyone know of a network-level bandwidth management solution like this? If so, would it also support rules based not only on something like the URL in an HTTP request, but also on additional HTTP headers?
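To illustrate the kind of rule I mean, something along these lines, where the hosts, header names, and limits are invented for the example:

```python
# Hypothetical L7 rules: match on the URL plus an HTTP header, pick a req/sec cap.
RULES = [
    ("api.example.com", "X-Api-Client", "reporting-batch", 1),  # stricter cap for one client
    ("api.example.com", None, None, 4),                         # default cap for the host
]

def rate_limit_for(url, headers):
    """Return the req/sec cap from the first matching rule, or None if none applies."""
    for host, hdr, value, limit in RULES:
        if host in url and (hdr is None or headers.get(hdr) == value):
            return limit
    return None

print(rate_limit_for("https://api.example.com/v1/items",
                     {"X-Api-Client": "reporting-batch"}))  # -> 1
```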