I don't know much about networking, sorry. I'm building a client/server application where the server streams a large amount of data to the client over TCP, and I'm trying to maximise throughput.
If I run both the client and the server on the same machine (so the traffic goes over loopback), I get pretty high throughput. However, when going over the network card (the two machines are on a LAN), performance drops a lot. The strange thing is that neither the bandwidth nor the CPU cores are fully utilised.
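To make this concrete, here is roughly what the two ends look like (a stripped-down sketch, not my actual code; the host, port, chunk size and total size are placeholders):

    # Simplified sketch of my setup. Run server() on one machine and
    # client(<server address>) on the other. CHUNK_SIZE is the message
    # size I ask about below.
    import socket
    import time

    HOST, PORT = "0.0.0.0", 9000    # placeholder address/port
    CHUNK_SIZE = 64 * 1024          # size of each send() call
    TOTAL_BYTES = 1 << 30           # stream 1 GiB for the measurement

    def server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                payload = b"\x00" * CHUNK_SIZE
                sent = 0
                while sent < TOTAL_BYTES:
                    conn.sendall(payload)
                    sent += CHUNK_SIZE

    def client(server_host):
        with socket.create_connection((server_host, PORT)) as sock:
            received, start = 0, time.perf_counter()
            while received < TOTAL_BYTES:
                data = sock.recv(CHUNK_SIZE)
                if not data:
                    break
                received += len(data)  # real client processes the data here
            elapsed = time.perf_counter() - start
            print(f"{received / elapsed / 1e6:.1f} MB/s")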
I was wondering whether the large number of hardware interrupts (several thousand per second) could be what's causing the resources to be underutilised. The client performs some processing on the incoming stream, so I would like to fully utilise the CPU and/or the bandwidth.
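For what it's worth, this is roughly how I got the "several thousand per second" figure, by diffing the counters in /proc/interrupts (Linux only; "eth0" is a placeholder for my NIC's actual name):

    # Sample the per-CPU counters in /proc/interrupts twice and diff them.
    import time

    def total_interrupts(nic_name="eth0"):
        total = 0
        with open("/proc/interrupts") as f:
            for line in f:
                if nic_name in line:
                    # Line format: "IRQ:  count_cpu0  count_cpu1 ...  chip  name"
                    fields = line.split()
                    total += sum(int(x) for x in fields[1:] if x.isdigit())
        return total

    before = total_interrupts()
    time.sleep(1.0)
    after = total_interrupts()
    print(f"~{after - before} NIC interrupts/sec")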
How would you go about diagnosing and improving that situation? Does the size of the messages sent by the server impact the number of interrupts?
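One knob I know how to turn but haven't actually tuned yet is the kernel socket buffer size; in case it's relevant to the answer, this is what I'd be adjusting (the 4 MiB value is an arbitrary guess on my part, not a tuned number):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    # The kernel may clamp or double the requested value; check what it granted:
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))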