I have a download stream over TCP in an application (running on Win2k12).
The problem is that the connection gets closed by the sender because it times out.
I used Wireshark to see what happens on two different servers (on one server it works fine, on the other it times out).
I have noticed the same behavior on both:
When the download starts, everything looks fine: the window size is 64k, it stays there for a while, and segments get acknowledged. Then at some point the window size starts to decrease until it reaches 0. (As far as I know this is normal: the receiver cannot keep up with the sender.) However, there is no ACK or window update from the receiver until the entire buffer has been read by the app; only then does a window update advertise a 64k window again. Then the cycle starts over and the window size shrinks to zero again.
This does not seem right to me. As the application reads from the buffer, free space should open up and a window update should be sent so that the sender can send the next segment.
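To make the pattern concrete, here is a minimal sketch of a receiver that would produce exactly the capture I described (Python, with a placeholder endpoint and made-up timing; my application is not actually written this way): nothing is read for a while so the advertised window shrinks to zero, then the buffer is drained in one burst and a single window update re-advertises 64k.

```python
import socket
import time

HOST, PORT = "sender.example", 9000       # placeholder endpoint, not my real setup
CHUNK = 16 * 1024

sock = socket.create_connection((HOST, PORT))
done = False

while not done:
    # The app is busy with other work and does not read from the socket,
    # so the kernel receive buffer fills and the advertised window
    # shrinks step by step until it reaches 0.
    time.sleep(5)

    # Then the buffered data is drained in a tight burst of reads; only
    # once the buffer is empty does a window update re-advertise 64k.
    while True:
        data = sock.recv(CHUNK)
        if not data:                      # sender closed the connection
            done = True
            break
        if len(data) < CHUNK:             # short read: buffer is (probably) empty
            break

sock.close()
```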
The other thing I don't understand is the behavior of the failing server. It advertises larger and larger window sizes in each such cycle; in the last cycle before the timeout the window size was around 800,000. The timeout occurs because the buffer is not emptied quickly enough, but I have no clue why the window size keeps increasing on this server. Is there a setting on the server to prevent this?
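For what it's worth, my current working assumption (unverified) is that if the application pinned the receive buffer size itself, the stack could not grow the advertised window beyond it. A minimal sketch of what I mean (again Python with a placeholder endpoint; this is not my actual code):

```python
import socket

HOST, PORT = "sender.example", 9000   # placeholder endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# My understanding (unverified) is that explicitly fixing SO_RCVBUF before
# connecting caps the receive buffer, and therefore the window the stack
# can advertise, at this size instead of letting it grow cycle after cycle.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
sock.connect((HOST, PORT))
```

Is that the right direction, or should this be handled by a server-wide setting instead?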
Are my assumptions right, or have I misunderstood something about the TCP protocol? Any ideas for solving this issue are appreciated.
Thanks.