
I have a download stream over TCP in an application (running on Win2k12).
The problem is that the connection gets closed by the sender because it times out.

I used Wireshark to see what happens on two different servers (on one server it works fine, on the other it times out). I noticed the same behavior on both:
When the download starts, everything looks fine: the window size is 64k and stays there for a while, and segments get acknowledged. Then at some point the window size starts to decrease until it reaches 0. (As far as I know this is normal; the receiver cannot keep up with the sender.) However, there is no ACK or window update from the receiver until the entire buffer has been read by the app; then a window update advertises a 64k window again, and the cycle starts over, with the window size decreasing to zero.
This does not seem right to me. As the application reads from the buffer, free space should open up and a window update should be sent so the sender can send the next segment.
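
To illustrate what I mean, here is a simplified Python stand-in for the receiving side (not my real code; the host and port are placeholders). A reader like this, draining the socket more slowly than the data arrives, is the kind of pattern I am describing:

    import socket
    import time

    # Simplified stand-in for the downloading application (placeholder endpoint).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("sender.example.com", 5000))

    with open("download.bin", "wb") as out:
        while True:
            chunk = sock.recv(4096)   # small reads...
            if not chunk:
                break
            out.write(chunk)
            time.sleep(0.05)          # ...slower than the data arrives, so the
                                      # receive buffer fills and the advertised
                                      # window shrinks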

The other thing I don't understand is the behavior on the failing server. In every such cycle it advertises a larger and larger window; in the last cycle before the timeout the window size was around 800,000. The timeout occurs because the buffer is not emptied quickly enough, but I have no clue why the window size keeps growing on this server. Is there a setting on the server to prevent this?

Are my assumptions right, or have I misunderstood something about the TCP protocol? Any ideas to solve this issue are appreciated.

Thanks.

Andras Toth

1 Answer


If the receiving process is not processing data as fast as it can be transferred over the network, the window is supposed to get smaller as packets are received, until the receive buffer is full and the window is 0. The receiving end is still supposed to ACK the received data in this situation, so that the sender knows not to retransmit it.

Once the window has gone to 0, the receiving end should not advertise free window space the moment the application reads another byte from the stream. It should wait at least until there is enough free space for one MTU-sized packet; waiting much longer than that is not a good idea.
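
As a rough sketch of that rule (just a toy model; the real logic lives inside the TCP stack, and RFC 1122 states the receiver-side threshold as the smaller of one MSS and half the buffer):

    # Toy model of receiver-side silly window syndrome avoidance: after the
    # window has closed, keep advertising 0 until at least min(MSS, buffer/2)
    # bytes of the receive buffer are free again.
    def advertised_window(free_space, mss, buffer_size):
        threshold = min(mss, buffer_size // 2)
        return free_space if free_space >= threshold else 0

    print(advertised_window(500, 1460, 65536))    # -> 0, too little space freed
    print(advertised_window(2920, 1460, 65536))   # -> 2920, worth advertising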

Dynamically resizing the receive buffer during the transfer is sensible behavior. However, the algorithm should aim to converge on a size that is large enough not to cause a bottleneck, but not much larger than that. The fluctuation you describe should not be happening, and if the receiving application cannot keep up with the arriving data, the window size should not be increased.
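
On Windows Server 2012 this dynamic resizing comes from receive window auto-tuning. If that is what is inflating the window, it can be restricted system-wide with netsh (netsh interface tcp set global autotuninglevel=restricted, or disabled). If you control the receiving application, another option is to set SO_RCVBUF explicitly before the connection is established, which should opt that socket out of auto-tuning and cap the advertised window. A minimal Python sketch with a placeholder endpoint:

    import socket

    # Cap the receive buffer from the receiving application itself. Setting
    # SO_RCVBUF before connecting should disable receive-window auto-tuning
    # for this socket, so the advertised window stays around this size.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
    sock.connect(("sender.example.com", 5000))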

The sender should not time out the connection without first sending a few keep-alive packets (zero-window probes). If the sender times out the connection without sending them, I'd say there is a bug on the sender. If the sender does send them but the receiver does not respond, I'd say there is a bug on the receiver.
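
Note that the probes sent against a zero window come from the sending TCP stack itself, not from the application. Separately from that, if the sender is your own code and you want explicit TCP keep-alives on otherwise idle connections, they can be enabled per socket; a Python sketch for Windows (the timing values here are arbitrary examples):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Windows-specific tuning via SIO_KEEPALIVE_VALS:
    # (enable, idle time before first probe in ms, interval between probes in ms)
    sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 30000, 5000))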

Did you inspect the traffic from each end of the connection to make sure there isn't any significant packet loss causing the timeout?

kasperd