I am running a simple TCP client and server application on two Linux hosts (2.6.x kernel, RHEL 6.3). In an infinite loop, the client sends a 1024-byte message and the server responds with a 100-byte ack; then the client sends the next 1024-byte message, and so on. The latency (RTT) between the two hosts, as measured by ping, averages around 0.23 ms.
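For reference, the traffic pattern looks roughly like the sketch below. This is a simplified illustration, not the actual application: the port number is arbitrary and error handling is mostly omitted.

```c
/* Sketch of the request/response pattern: client sends 1024 bytes,
 * server replies with a 100-byte ack, in a loop.
 * Build: gcc -o pingpong pingpong.c
 * Run:   ./pingpong server          (host A)
 *        ./pingpong client <ip-A>   (host B)
 */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT   5000   /* placeholder port */
#define MSG_SZ 1024
#define ACK_SZ 100

/* TCP is a byte stream, so a single read() may return short; loop
 * until exactly n bytes have arrived. */
static int read_full(int fd, char *buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0) return -1;
        got += (size_t)r;
    }
    return 0;
}

static void run_server(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0), one = 1;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(PORT);
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1);
    int cfd = accept(lfd, NULL, NULL);
    char msg[MSG_SZ], ack[ACK_SZ] = {0};
    for (;;) {  /* 1024 bytes in, 100-byte ack out */
        if (read_full(cfd, msg, MSG_SZ) < 0) break;
        if (write(cfd, ack, ACK_SZ) != ACK_SZ) break;
    }
    close(cfd);
    close(lfd);
}

static void run_client(const char *host) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    inet_pton(AF_INET, host, &addr.sin_addr);
    connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    char msg[MSG_SZ] = {0}, ack[ACK_SZ];
    for (;;) {  /* strict ping-pong: next send waits for the ack */
        if (write(fd, msg, MSG_SZ) != MSG_SZ) break;
        if (read_full(fd, ack, ACK_SZ) < 0) break;
    }
    close(fd);
}

int main(int argc, char **argv) {
    if (argc >= 2 && strcmp(argv[1], "server") == 0) run_server();
    else if (argc >= 3 && strcmp(argv[1], "client") == 0) run_client(argv[2]);
    else fprintf(stderr, "usage: %s server | client <ip>\n", argv[0]);
    return 0;
}
```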
I am observing that the client and server normally exchange about 3200 messages per second, but after running for 2-3 minutes the rate jumps as high as 5100 messages per second. The higher rate lasts for a few seconds and then falls back to 3200. How can I figure out what causes these jumps in throughput?
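Note that 3200 msg/s works out to about 0.31 ms per exchange and 5100 msg/s to about 0.20 ms, which brackets the 0.23 ms ping RTT, so the whole effect is a shift of roughly 0.1 ms per round trip. Here is a sketch of how I log per-second rates from the client loop (illustrative only, not my exact measurement code; the round-trip call is a placeholder):

```c
/* Per-second message-rate logging sketch. On RHEL 6, link with -lrt
 * for clock_gettime. */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    long count = 0;
    for (;;) {
        /* ... one send/recv round trip would go here ... */
        count++;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double elapsed = (now.tv_sec - start.tv_sec) +
                         (now.tv_nsec - start.tv_nsec) / 1e9;
        if (elapsed >= 1.0) {            /* print and reset each second */
            printf("%.0f msg/s\n", count / elapsed);
            count = 0;
            start = now;
        }
    }
}
```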
UPDATE: The two hosts are on the same VLAN, connected through a Cisco Catalyst switch, and the network bandwidth is 1 Gb/s. At these message sizes, even 5100 msg/s is only about 46 Mb/s of payload (5100 × 1124 bytes/s), so the link is nowhere near saturated.