I am designing a network service in which clients connect and stay connected -- the model is not far off from IRC, minus the server-to-server (s2s) connections.
I could use some help understanding how to do capacity planning, in particular the system resource costs of handling messages to and from clients.
There's an article describing an attempt to get 1 million clients connected to a single server [1]. Of course, most of those clients were completely idle during the test. If each client instead sent a message every 5 seconds or so -- roughly 200,000 messages per second in aggregate -- the system would surely be brought to its knees.
But how do you do less hand-waving and, you know, actually measure such a breaking point?
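The best I've come up with so far is a harness along the lines of the sketch below: open N connections, have each one send a small message every few seconds, and watch where latency and errors take off as N grows. I'm not sure it isolates the right resource, though. Everything in it (HOST/PORT, the echo-back behaviour, the 512-byte message) is made up for illustration:

    # Sketch of the measurement harness I'm imagining (Python).
    # Assumes a hypothetical echo-style test endpoint at HOST:PORT that
    # writes each message straight back to the client.
    import socket
    import statistics
    import threading
    import time

    HOST, PORT = "127.0.0.1", 9000   # hypothetical test server
    N_CLIENTS = 500                  # scale this up between runs
    INTERVAL = 5.0                   # seconds between messages per client
    MESSAGE = b"x" * 512             # assumed message size
    DURATION = 60.0                  # length of one run, in seconds

    latencies, errors = [], 0
    lock = threading.Lock()

    def client():
        global errors
        try:
            s = socket.create_connection((HOST, PORT))
            deadline = time.monotonic() + DURATION
            while time.monotonic() < deadline:
                start = time.monotonic()
                s.sendall(MESSAGE)
                remaining = len(MESSAGE)          # read the full echo back
                while remaining:
                    chunk = s.recv(remaining)
                    if not chunk:
                        raise OSError("connection closed")
                    remaining -= len(chunk)
                with lock:
                    latencies.append(time.monotonic() - start)
                time.sleep(INTERVAL)
            s.close()
        except OSError:
            with lock:
                errors += 1

    threads = [threading.Thread(target=client) for _ in range(N_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    latencies.sort()
    print(f"clients={N_CLIENTS} errors={errors} samples={len(latencies)}")
    if latencies:
        p99 = latencies[int(len(latencies) * 0.99)]
        print(f"p50={statistics.median(latencies)*1000:.1f} ms  p99={p99*1000:.1f} ms")

The idea is to repeat the run with larger N_CLIENTS (and more load-generating machines) while watching the server's CPU, memory, and softirq time, and call the knee in the latency curve the "breaking point". Is that roughly how people do it, or am I missing something smarter?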
We're talking about a message being sent by a client over a TCP socket, copied into the kernel, and read by the application, with the data shuffled around in memory from one buffer to another along the way. Do I need to consider memory throughput ("5 GT/s" [2], etc.)?
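For what it's worth, here's the back-of-envelope I've been doing to decide whether memory bandwidth even matters. The message size and copy count are guesses, and as far as I understand it the "GT/s" figure is a link transfer rate (QPI/PCIe) rather than usable memory bandwidth anyway:

    # Back-of-envelope: how many bytes per second actually move through memory?
    # Message size and copy count are assumptions, not measurements.
    clients = 1_000_000
    interval_s = 5.0
    msg_bytes = 512          # assumed average message size
    copies = 4               # rough guess: NIC->kernel, kernel->user, user->kernel, kernel->NIC

    msgs_per_s = clients / interval_s                  # 200,000 msg/s
    bytes_per_s = msgs_per_s * msg_bytes * copies      # ~0.4 GB/s of copying
    print(f"{msgs_per_s:,.0f} msg/s -> ~{bytes_per_s/1e9:.2f} GB/s moved through memory")

Against the tens of GB/s of memory bandwidth a typical server measures, that looks like a rounding error, which makes me suspect the per-message CPU cost (syscalls, wakeups, parsing) dominates long before the copies do -- but that's exactly the kind of thing I don't know how to confirm.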
I'm pretty sure I can work out the basic memory requirements from the TCP/IP buffers, the expected bandwidth, and the CPU resources required to process messages. It's the part I'm calling "throughput" that I'm a little dim on.
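For the buffer side, this is the kind of arithmetic I have in mind. The buffer sizes are the common Linux tcp_rmem/tcp_wmem defaults and the per-connection application overhead is a pure guess; I realise the kernel only allocates buffer space as data actually queues up, so this is more of an upper bound than a prediction:

    # Rough upper bound on memory for socket buffers plus app state.
    # Buffer sizes are the usual Linux defaults; check /proc/sys/net/ipv4/tcp_rmem
    # and tcp_wmem on the real box. App overhead is a made-up number.
    connections = 1_000_000
    rcv_buf = 87_380         # typical tcp_rmem default (bytes)
    snd_buf = 16_384         # typical tcp_wmem default (bytes)
    app_state = 4_096        # assumed per-connection application state

    per_conn = rcv_buf + snd_buf + app_state
    print(f"~{per_conn/1024:.0f} KiB/conn -> ~{connections*per_conn/2**30:.1f} GiB for 1M connections")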
Help!
Also, does anyone really do this? Or do most people just hand-wave, see what the real world offers, and then react appropriately?
[1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/