
I have a relatively new 8-core box running CentOS. I would like to develop a stats server that uses TCP. It's very simple: it accepts a TCP connection, increments a counter and closes the connection. The catch is that it needs to do this at a rate of at least 10k requests a second. I suspect CPU/memory won't be a problem, but I'm more concerned about artificial limits (like half-open connections) that I might need to configure on my server to allow this kind of volume. So, is this possible? Which settings should I be aware of? Will my NIC be able to handle it?

5 Answers


This is commonly known as the c10k problem. That page has lots of good info on the problems you will run into.

Greg Hewgill
  • Yeah, good link! – sybreon Sep 29 '09 at 09:03
  • 1
    I would expect to see more/different problems than those mentioned on the c10k page. Establishing and closing 10k connections per second is different from having 10k open connections. Connections staying in the TIME_WAIT state would be one, hitting the backlog limit for a listening socket might be another. And I wouldn't be surprised if that use-case hasn't received as much profiling/optimisation in the kernel code than the more common 10k open connections case. – cmeerw Sep 29 '09 at 20:17
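To make the backlog point concrete: on Linux, a listen() backlog larger than net.core.somaxconn is silently truncated to that value, so it's worth checking the sysctl before assuming the requested backlog is in effect. A minimal sketch of such a check (the 4096 figure is only an example of what a server like this might ask for):

/* Sketch: report net.core.somaxconn, since a listen() backlog larger
   than this value is silently truncated by the kernel. */
#include <stdio.h>

int main(void)
{
    int somaxconn = 0;
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    if (!f || fscanf(f, "%d", &somaxconn) != 1) {
        perror("somaxconn");
        return 1;
    }
    fclose(f);

    int wanted = 4096;   /* placeholder: the backlog the server would request */
    printf("net.core.somaxconn = %d\n", somaxconn);
    if (somaxconn < wanted)
        printf("a backlog of %d would be capped to %d; raise the sysctl\n",
               wanted, somaxconn);
    return 0;
}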

You should be able to do it [although that's probably a bad idea].

On a Resin app server I can get ~5k req/sec on a quad-core 2.6 GHz Xeon. The requests invoke a simple servlet that reads one row from MySQL and sends a very small XML response.

The test was done with:

ab -n 10000 -c 16 http://some/url/

Test results:

Concurrency Level:      16
Time taken for tests:   1.904 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      3190000 bytes
HTML transferred:       1850000 bytes
Requests per second:    5252.96 [#/sec] (mean)
Time per request:       3.046 [ms] (mean)
Time per request:       0.190 [ms] (mean, across all concurrent requests)
Transfer rate:          1636.42 [Kbytes/sec] received

But I think you'll be much better off with a simple C program, certainly without spawning a new thread for each request. The link from Greg Hewgill should give you a good idea of how to go about it.
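A minimal sketch of what such a program could look like, assuming a placeholder port of 9999 and a backlog of 4096 (error handling kept to a minimum):

/* Single-threaded accept loop: one counter increment per connection,
   no thread per request. Port and backlog values are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0) { perror("socket"); return 1; }

    /* Lets the server rebind quickly after a restart while old sockets
       sit in TIME_WAIT. */
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);       /* placeholder port */

    if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Ask for a deep accept queue; the kernel caps it at net.core.somaxconn. */
    if (listen(lfd, 4096) < 0) {
        perror("listen");
        return 1;
    }

    unsigned long long counter = 0;
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;                  /* e.g. interrupted by a signal */
        if (++counter % 100000 == 0)   /* the "request" is the connect itself */
            printf("%llu connections counted\n", counter);
        close(cfd);
    }
}

Since accept(), an increment and close() are all the work per request, the limits you hit are far more likely to be connection-rate related (accept backlog, TIME_WAIT, file descriptors) than CPU.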

Even during prolonged tests I don't get any problems with connectivity [none of the half-open sockets you mention]; the test runs between two Linux boxes connected over gigabit Ethernet [although, as you can see, bandwidth is not the bottleneck].

pQd
  • Are your connections closed after every response, like the OP's? Is ab sending a Connection: close header? – Nate May 20 '12 at 22:26
  • @Nate it's HTTP 1.0: a single connection for every single HTTP request. – pQd May 21 '12 at 06:11

You may be interested in a Linux kernel limit I hit while load testing Apache. In my case, the kernel produced some useful error messages, so my advice is to write your program and, if you seem to be hitting a limit, pay attention to the kernel logs.

Ben Williams

I would use UDP instead of TCP if possible. It should be more lightweight and therefore scale better.
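A sketch of what the UDP variant of this counter could look like, with a placeholder port and the datagram payload ignored:

/* UDP variant: one datagram received = one increment, with no
   connection setup or teardown per request. Port is a placeholder. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);       /* placeholder port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    unsigned long long counter = 0;
    char buf[64];
    for (;;) {
        /* Payload contents are ignored; receiving the datagram is the event. */
        if (recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL) >= 0)
            if (++counter % 100000 == 0)
                printf("%llu datagrams counted\n", counter);
    }
}

The trade-off is that UDP is lossy: a dropped datagram is a lost count, so it depends on how exact the stats need to be.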


Your NIC should be able to handle it, but I question the design of creating 10k new TCP connections per second; if you're creating/destroying connections that quickly, then you should either a) keep them open for longer or b) use UDP instead.

In the case where you have 1M clients which need to do a query from time to time, but where the load will hit 10k per second, UDP is probably a better choice.

In the case where you have only 10k clients which need to do a query every second, they could just hold existing connections open and reuse them. This would be far kinder to the OS, and would also produce much lower latency as it wouldn't require a new handshake each time.
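A rough sketch of that approach, assuming an invented wire format of one byte per increment plus a placeholder port and connection limit: a poll()-based loop that keeps client connections open and counts bytes instead of connects.

/* Clients hold their connections open and send one byte per increment,
   instead of opening a new TCP connection per request.
   Port, MAX_FDS and the wire format are placeholders for illustration. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 1024                     /* placeholder connection limit */

int main(void)
{
    unsigned long long counter = 0;
    struct pollfd fds[MAX_FDS];
    int nfds = 1;

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);         /* placeholder port */
    if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lfd, 1024) < 0) {
        perror("bind/listen");
        return 1;
    }

    fds[0].fd = lfd;
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) <= 0)
            continue;                    /* interrupted; just retry */

        /* New client: keep the connection open and watch it for data. */
        if (fds[0].revents & POLLIN) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd >= 0 && nfds < MAX_FDS) {
                fds[nfds].fd = cfd;
                fds[nfds].events = POLLIN;
                fds[nfds].revents = 0;
                nfds++;
            } else if (cfd >= 0) {
                close(cfd);              /* over the sketch's limit */
            }
        }

        /* Existing clients: every byte received is one increment. */
        for (int i = 1; i < nfds; i++) {
            if (!(fds[i].revents & (POLLIN | POLLERR | POLLHUP)))
                continue;
            char buf[4096];
            ssize_t n = read(fds[i].fd, buf, sizeof(buf));
            if (n <= 0) {                /* peer closed or error */
                close(fds[i].fd);
                fds[i] = fds[--nfds];    /* compact the poll array */
                i--;
            } else {
                counter += (unsigned long long)n;
            }
        }
    }
}

This trades the per-request handshake and TIME_WAIT churn for the cost of tracking open connections, which is the 10k-open-connections case the c10k page is mostly about.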

In the case where you have 10k requests per second, I imagine you have a front-end load balancer anyway, so you'll need to test that too.

(NB: I think this belonged on Stack Overflow)

MarkR