
I have two processes communicating over TCP/IP as follows: process A sends 20 KB of data to process B, and after some calculations process B sends the response (2 KB) back to A.

When I run both processes on the same computer 100 times in a loop (EDIT: each iteration waits for the response from the previous one before sending a new task), the total execution time is 15 seconds. When I run them on different machines connected by a 1 Gbps network, the total execution time increases to 30 seconds. So I assume that 15 seconds are spent on communication over the network.
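Roughly, the measurement loop on the A side looks like this (a simplified sketch; the address, port, and message framing here are placeholders, not the actual code):

    import socket
    import time

    HOST, PORT = "192.168.1.10", 5000        # placeholder address of the machine running B
    TASK = b"x" * (20 * 1024)                # 20 KB task payload
    RESPONSE_SIZE = 2 * 1024                 # 2 KB expected response

    start = time.time()
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(100):
            sock.sendall(TASK)                       # send the task to B
            received = 0
            while received < RESPONSE_SIZE:          # block until the full 2 KB reply arrives
                chunk = sock.recv(RESPONSE_SIZE - received)
                if not chunk:
                    raise ConnectionError("connection closed by B")
                received += len(chunk)
    print("total time: %.2f s" % (time.time() - start))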

I would like to know if this is a reasonable communication time (15 seconds / 100 iterations = 0.15 seconds to send a task and get the response) for the specified network throughput and the amount of data sent. If it can be faster (which is what I tend to think), where should I look (firewall, routing, etc.)?

The OS used is Windows 7 Ultimate, if that is relevant.

AdelNick
2 Answers


The actual throughput of the network shouldn't be a factor here: each transaction moves only about 22 KB in total (roughly 180 kilobits), which takes well under a millisecond to put on the wire at 1 Gbps, so you won't even come close to saturating the link.

Now that your data must travel down through layers 1-6 of the OSI model and then back up to the application layer at the receiving end, each step along the way will add a very small amount of latency to the connection.

However, that being said, 150 milliseconds is quite a while for such a small amount of data (a 20 KB burst with a 2 KB return), so unless your network is congested or has some strict QoS in place, there's no reason the addition of the network should make it take that long. You can test this yourself if you like: send an ICMP packet 20 KB in size and see how long it takes to get a response (ping X.X.X.X -l 20480).
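If you want a check that is closer to what the application actually does, the same round trip can be timed over TCP as well. A rough sketch, assuming a trivial echo server (anything that writes back whatever it receives) has been started on the other machine on a port of your choosing:

    import socket
    import time

    PAYLOAD = b"x" * 20480                     # 20 KB, same size as the ICMP test above
    HOST, PORT = "192.168.1.10", 5001          # placeholder address/port of the echo server

    with socket.create_connection((HOST, PORT)) as sock:
        t0 = time.time()
        sock.sendall(PAYLOAD)                  # push 20 KB out
        received = 0
        while received < len(PAYLOAD):         # wait for the full echo to come back
            chunk = sock.recv(len(PAYLOAD) - received)
            if not chunk:
                break
            received += len(chunk)
        print("TCP round trip: %.1f ms" % ((time.time() - t0) * 1000))

As with the ICMP test, if this comes back well under 20 ms on your LAN, the network is probably not where the 150 ms per transaction is going.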

But to be honest, there are so many other factors at play here, and running 100 instances of your application at once only makes things more complicated. How long does it actually take when you run a single instance?

blacklight
  • The 100 instances are run synchronously, not in parallel, so a new task is sent only when the response to the previous one has been received. I run 100 iterations to make the measurement more accurate. – AdelNick Jan 09 '14 at 13:33
  • However, I take the point that I could hope for better performance. I'm not sure what to look at further, but at least it's worth investigating. – AdelNick Jan 09 '14 at 13:36
  • Gotcha, in that case then yes, 100 iterations is fine. I would start by checking your base network latency using some large ICMP packets (as mentioned above) - if their response time is < 20ms, then your application's latency is probably not being caused by the network. However if it also takes 150ms to send the large ICMP, then you will at least know the network is causing the increase in time, and not something else. – blacklight Jan 09 '14 at 13:44

It doesn't feel right that this is a network capacity problem (unless, of course, something else is eating the bandwidth). When all is said and done, your total data transfer for the 100 transactions is about 2.2 MB, which is nothing really.

  • Check network usage by other systems - I doubt this is an issue.
  • Does the remote system process the data as fast as the local one? One way to check is to time the calculation inside process B itself (see the sketch below).
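A minimal sketch of that second check; do_calculation here is a made-up stand-in for whatever process B actually computes:

    import time

    def do_calculation(task_bytes):
        # stand-in for process B's real work on the 20 KB task
        return task_bytes[:2048]                 # pretend the result is 2 KB

    def handle_task(task_bytes):
        t0 = time.perf_counter()
        result = do_calculation(task_bytes)
        elapsed_ms = (time.perf_counter() - t0) * 1000
        print("calculation took %.1f ms" % elapsed_ms)   # log this in both setups and compare
        return result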
user9517
  • Yes, the environment for process B is always the same. I only change the machine where process A is running for both experiments. – AdelNick Jan 09 '14 at 13:38