I'm looking for a solution to rate-limit on a per-IP basis. How many packets per second can the hashlimit iptables module handle on a recent Intel x86_64 CPU core? 1,000/sec? 1,000,000/sec?
-
What does your benchmarking say? – womble Jul 10 '12 at 09:46
-
See http://serverfault.com/a/384155/75118 Note that classifying packets in iptables is still much slower than using traffic control filters. But a whole bunch more convenient. – Matthew Ife Jul 10 '12 at 10:35
2 Answers
The most relevant additional machinery netfilter has to go through, from what I can see in the source, is hashing new entries, updating entry credits, looking up entries, and cleaning up the underlying hash tables (see /proc/net/ipt_hashlimit).
Because hash tables are used, all of those operations are constant time and quite fast, except for table cleanup. The latter is expensive if you have requests from many different source addresses.
If I had to make a rough estimate of the hashlimit overhead, I would add at most 15% to the cost of processing a standard rule set. As usual, the best way to tell is to measure. If you do, update this post :)
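For reference, a per-source-IP limit of the kind the question asks about can be written with hashlimit roughly like this. This is a sketch: the option names are standard hashlimit options, but the thresholds and table sizes are arbitrary illustrative values, not tuned recommendations.

```shell
# Drop packets from any single source IP that exceeds 1000 packets/sec,
# allowing short bursts of up to 2000 packets.
iptables -A INPUT -m hashlimit \
    --hashlimit-name per_ip_limit \
    --hashlimit-mode srcip \
    --hashlimit-above 1000/sec \
    --hashlimit-burst 2000 \
    --hashlimit-htable-size 32768 \
    --hashlimit-htable-max 65536 \
    --hashlimit-htable-expire 10000 \
    -j DROP
```

The live entries and their remaining credits can then be inspected in /proc/net/ipt_hashlimit/per_ip_limit, which is useful when benchmarking.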
As a side note, you might want to check out the PF rate limiting option on BSD.
Hash tables are generally efficient and mostly scale linearly, although, looking at the options, check more carefully around htable-gcinterval, as it may have the most impact on performance. Garbage collection of the hash table is probably the most expensive operation; if I expected a bottleneck in the hashlimit implementation, it would be around the gc code.
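The asymmetry described above can be sketched as a toy model (an illustration only, not the kernel implementation): the per-packet path touches a single hash entry in constant time, while garbage collection must walk the entire table.

```python
class HashLimitTable:
    """Toy model of hashlimit's per-IP token buckets.

    Illustration only; names and structure are made up for clarity,
    not taken from the kernel source.
    """

    def __init__(self, rate_per_sec, burst, expire_sec):
        self.rate = rate_per_sec      # credits refilled per second
        self.burst = burst            # maximum stored credits
        self.expire = expire_sec      # idle time before an entry is dead
        self.entries = {}             # ip -> (tokens, last_seen)

    def allow(self, ip, now):
        """Per-packet path: one hash lookup plus a credit update, O(1)."""
        tokens, last = self.entries.get(ip, (self.burst, now))
        # Refill credits proportionally to elapsed time, capped at burst.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.entries[ip] = (tokens - 1, now)
            return True
        self.entries[ip] = (tokens, now)
        return False

    def gc(self, now):
        """Cleanup path: must scan the WHOLE table, O(number of entries).

        This is the analogue of the gc work controlled by
        htable-gcinterval, and why it dominates with many distinct IPs.
        """
        dead = [ip for ip, (_, last) in self.entries.items()
                if now - last > self.expire]
        for ip in dead:
            del self.entries[ip]
        return len(dead)
```

With many distinct source IPs the table grows, and each gc pass gets proportionally more expensive even though the per-packet cost stays flat, which is why the gc interval is the knob to watch.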