
I have been experimenting with the TCP parameters in Linux (on a 3.5 kernel), specifically for this connection:

Server: Gigabit uplink in a datacenter; actual bandwidth (due to shared uplinks) is around 70 MB/s when tested from another datacenter.

Client: Gigabit local LAN connected to a 200 Mbit/s fiber line. Fetching a test file actually achieves 20 MB/s.

Latency: about 50 ms round trip.

The remote server is used as a fileserver for files in the range of 10 to 100 MB. I noticed that with an initcwnd of 10 the transfer time for these files is heavily affected by TCP slow start: it takes 3.5 seconds to load a 10 MB file (top speed reached: 3.3 MB/s), because the connection starts slow and ramps up, but the transfer finishes before the maximum speed is reached. My goal is to tune for minimum load times of these files (so not highest raw throughput or lowest round-trip latency; I'm willing to sacrifice both if that decreases the actual time it takes to load a file).

So I tried a simple calculation to determine the ideal initcwnd, ignoring other connections and the possible impact on them. The bandwidth-delay product is 200 Mbit/s * 50 ms = 10 Mbit, or 1,310,720 bytes. Since initcwnd is set in units of MSS, and assuming an MSS of around 1400 bytes, this would require a setting of 1,310,720 / 1400 ≈ 936.
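For reference, this is roughly how I would apply it on the server; initcwnd is set per route with iproute2, and the gateway and interface below are placeholders for my actual setup:

```
# Show the current default route (initcwnd is 10 unless overridden)
ip route show default

# Sketch only: raise initcwnd on the default route; replace the gateway
# and interface with the real ones for this server.
ip route change default via 192.0.2.1 dev eth0 initcwnd 936
```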

This value is very far from the default (10*MSS in Linux, 64 KB in Windows), so it doesn't feel like a good idea to set it like this. What are the expected downsides of configuring it like this? For example:

  • Will it affect other users of the same network?
  • Could it create unacceptable congestion for other connections?
  • Flood router-buffers somewhere on the path?
  • Increase the impact of small amounts of packet-loss?
Tomas
    Can you confirm that you are talking megabytes/s when you say `70 MB/s` and not megabits? Just looking for clarification. – Andy Shinn Feb 12 '13 at 22:49
  • Yes, megabytes/s not megabits. – Tomas Feb 19 '13 at 08:54
  • If I were you, I would try multiplying it by 2 a few times (10, 20, 40, 80, ...) and see how it improves your typical download times – mvp Jul 14 '14 at 05:22

2 Answers


What are the expected downsides of configuring it like this? For example:

Will it affect other users of the same network?

Changing the initcwnd will affect:

  • users of the server where the setting is changed,
  • and only if those users' connections match the route the setting is configured on (see the sketch below).
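For example, the change can be scoped to a single destination network so that only clients reached over that route get the larger initial window; a rough sketch with placeholder prefix, gateway and interface:

```
# Only connections routed through this entry use initcwnd 936;
# all other routes keep the kernel default.
ip route replace 198.51.100.0/24 via 192.0.2.1 dev eth0 initcwnd 936
```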
Could it create unacceptable congestion for other connections?

Sure.

Flood router-buffers somewhere on the path?

It's not irrelevant, but unless they are your routers, I'd focus on the issues that are closer to you.

Increase the impact of small amounts of packet-loss?

Sure, it can do this.

The upshot is that this will increase the cost of packet loss, both intentional and unintentional. It also makes your server easier to DoS for anyone capable of completing the 3-way handshake, since a small amount of data in triggers a large burst of data out.

It will also increase the chance that a bunch of those packets will need to be retransmitted if one of the first packets in the burst gets lost.

Slartibartfast
  • Ok, so to summarize: for a private server with the initcwnd set only for the correct routes, it is a good improvement in interactivity for users. – Tomas Feb 24 '13 at 17:26

I don't think I fully understand what you're asking for, so here's an attempt to respond:

First of all, what you're trying to do only makes sense on the sending side and not the receiving side. I.e. you need to be changing the file server and not the receiver. Assuming that's what you're doing:

Changing initcwnd to (e.g.) 10 means that 10 packets will go out immediately. If all of them reach their target, you may end up with a much larger window after the first RTT because of slow start (the exponential cwnd increase). However, upon packet loss the cwnd will be halved, and since you're bursting 10 packets at a time you may see a considerable number of retransmissions, so you may end up with more problems than you expect.

If you want to try something more aggressive and be somewhat "rude" to other Internet users, then you can instead change the congestion control algorithm on the server side. Different algorithms handle cwnd in different ways. Keep in mind that this will affect all users unless your server-side software changes it per connection (which I highly doubt). The benefit here is that the algorithm stays in effect even after packet loss, while initcwnd won't play much of a role then.

`/proc/sys/net/ipv4/tcp_congestion_control` is where you change the congestion control algorithm.
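A rough sketch of how that looks (the algorithm name is only an example; what is actually available depends on your kernel and the modules loaded):

```
# List the congestion control algorithms this kernel offers
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# Check the current default and change it (cubic is just an example;
# it is already the default on most recent kernels)
sysctl net.ipv4.tcp_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=cubic
```

Per-connection selection would require the application itself to set the TCP_CONGESTION socket option, which is the "unless your server-side software changes it per connection" caveat above.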

FWIW, for such small RTTs (50 ms) and for medium or large files the initcwnd shouldn't affect your average speed much. If there's no packet loss (i.e. a fat pipe), cwnd will double at every RTT. With RTT = 50 ms you'll fit 20 RTTs in the first second, meaning that with initcwnd=2 you'd end up with cwnd = 2*2^20 after 1 second, which I bet is more than you can handle ;-)

V13