I have been experimenting with the TCP parameters in Linux (with a 3.5 kernel). Basically concerning this connection:
Server: Gigabit uplink in a datacenter; actual bandwidth (due to shared uplinks) is around 70 MB/s when tested from another datacenter.
Client: Gigabit local LAN behind a 200 Mbit/s fiber uplink. Fetching a test file actually achieves 20 MB/s.
Latency: About 50 ms round-trip.
The remote server is used as a fileserver for files in the range of 10 to 100 MB. I noticed that with an initcwnd of 10, the transfer time for these files is dominated by TCP slow-start: loading 10 MB takes 3.5 seconds (top speed reached: 3.3 MB/s), because the transfer starts slow and ramps up, yet finishes before the maximum speed is reached. My goal is to tune for minimum load times of those files (so not highest raw throughput or lowest round-trip latency; I'm willing to sacrifice both if that reduces the actual time it takes to load a file).
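To make the slow-start cost concrete, here is a rough model (my own back-of-the-envelope, not a measurement): the congestion window starts at initcwnd segments and doubles every round-trip until the file has been sent, ignoring loss, receive-window limits, and the link's bandwidth ceiling.

```python
MSS = 1400                     # bytes per segment (assumed)
RTT = 0.050                    # seconds round-trip
FILE_SIZE = 10 * 1024 * 1024   # the 10 MB test file

def slow_start_time(file_size, initcwnd, mss=MSS, rtt=RTT):
    """Return (seconds, round_trips) to deliver file_size bytes,
    with cwnd doubling each round-trip (pure slow-start model)."""
    sent, cwnd, rounds = 0, initcwnd, 0
    while sent < file_size:
        sent += cwnd * mss   # one window's worth of data per round-trip
        cwnd *= 2            # exponential growth during slow-start
        rounds += 1
    return rounds * rtt, rounds

for icw in (10, 936):
    t, r = slow_start_time(FILE_SIZE, initcwnd=icw)
    print(f"initcwnd={icw}: {r} round-trips, ~{t:.2f}s of pure RTT cost")
```

With initcwnd 10 this gives 10 round-trips (~0.50 s of RTT cost alone) versus 4 round-trips (~0.20 s) with initcwnd 936, which is why the window size matters so much for files this size.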
So I tried a simple calculation to determine what the ideal initcwnd should be, ignoring any other connections and possible impact on others. The bandwidth-delay product is 200 Mbit/s * 50 ms = 10 Mbit, or 1,310,720 bytes (treating 1 Mbit as 2^20 bits). Considering that initcwnd is set in units of MSS, and assuming an MSS of around 1400 bytes, this requires a setting of 1,310,720 / 1400 ≈ 936.
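The same arithmetic, spelled out (using binary megabits to match the 1,310,720-byte figure; with decimal megabits the BDP is 1,250,000 bytes and ~893 segments, the same order of magnitude):

```python
bandwidth_bps = 200 * 2**20   # 200 Mbit/s, binary interpretation
rtt_s = 0.050                 # 50 ms round-trip
mss = 1400                    # assumed segment size in bytes

bdp_bytes = bandwidth_bps * rtt_s / 8   # bandwidth-delay product
initcwnd = round(bdp_bytes / mss)       # window needed to fill the pipe

print(f"BDP = {bdp_bytes:.0f} bytes -> initcwnd ≈ {initcwnd} segments")
```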
This value is very far from the default (10 * MSS in Linux, 64 KB in Windows), so it doesn't feel like a good idea to set it like this. What are the expected downsides of configuring it this way? E.g.:
- Will it affect other users of the same network?
- Could it create unacceptable congestion for other connections?
- Flood router buffers somewhere on the path?
- Increase the impact of small amounts of packet loss?
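To put the last point in rough numbers, here is a Reno-style sketch (an assumed model; Linux defaults to CUBIC, which recovers faster but shows the same pattern): a single loss halves cwnd, after which it grows by one segment per round-trip, so a loss at a large window takes far longer to recover from.

```python
RTT = 0.050  # seconds round-trip, as in the scenario above

def rtts_to_recover(cwnd_at_loss):
    """Round-trips for cwnd to climb back from cwnd/2 to cwnd,
    growing one segment per RTT (Reno-style congestion avoidance)."""
    return cwnd_at_loss - cwnd_at_loss // 2

for cwnd in (10, 936):
    r = rtts_to_recover(cwnd)
    print(f"loss at cwnd={cwnd}: ~{r} RTTs (~{r * RTT:.1f}s) to recover")
```

Under this model a single early loss at cwnd 936 costs hundreds of round-trips to recover from, versus a handful at cwnd 10, which is one concrete way a huge initcwnd can backfire on lossy paths.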