
I've got two Windows Server 2012 machines that limit single-connection transfer speed to anywhere from 1-3 MB/s, and it seems to be related to TCP window scaling. Both servers have these settings:

 TCP Global Parameters
 ----------------------------------------------
 Receive-Side Scaling State          : enabled
 Chimney Offload State               : disabled
 NetDMA State                        : disabled
 Direct Cache Access (DCA)           : disabled
 Receive Window Auto-Tuning Level    : normal
 Add-On Congestion Control Provider  : none
 ECN Capability                      : enabled
 RFC 1323 Timestamps                 : disabled
 Initial RTO                         : 3000
 Receive Segment Coalescing State    : disabled
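
(For reference, the block above is the output of netsh int tcp show global; the same globals can be changed with netsh int tcp set global, for example the autotuning and ECN toggles suggested in the comments below:)

 netsh int tcp show global
 netsh int tcp set global autotuninglevel=experimental
 netsh int tcp set global ecncapability=disabled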

And the iperf results are:

PS C:\iperf-3.1.3-win64> .\iperf3.exe -c x.x.x.x --port 27015 --verbose
iperf 3.1.3
CYGWIN_NT-6.2 ks4000721 2.5.1(0.297/5/3) 2016-04-21 22:14 x86_64
Time: Mon, 27 Aug 2018 07:13:21 GMT
Connecting to host x.x.x.x, port 27015
      Cookie: ks4000721.1535354000.903985.669c51a3
      TCP MSS: 0 (default)
[  4] local y.y.y.y port 53412 connected to x.x.x.x port 27015
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  2.00 MBytes  16.8 Mbits/sec
[  4]   1.00-2.00   sec  3.38 MBytes  28.3 Mbits/sec
[  4]   2.00-3.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   3.00-4.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   4.00-5.00   sec  3.38 MBytes  28.3 Mbits/sec
[  4]   5.00-6.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   6.00-7.00   sec  3.38 MBytes  28.3 Mbits/sec
[  4]   7.00-8.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   8.00-9.00   sec  2.88 MBytes  24.1 Mbits/sec
[  4]   9.00-10.00  sec  2.12 MBytes  17.8 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  31.1 MBytes  26.1 Mbits/sec                  sender
[  4]   0.00-10.00  sec  31.1 MBytes  26.1 Mbits/sec                  receiver
CPU Utilization: local/sender 1.5% (0.6%u/0.9%s), remote/receiver 0.3% (0.2%u/0.2%s)

iperf Done.

If I set the window manually in iperf, I can max out the server's port speed.

PS C:\iperf-3.1.3-win64> .\iperf3.exe -c x.x.x.x --port 27015 --verbose --window 16000000
iperf 3.1.3
CYGWIN_NT-6.2 ks4000721 2.5.1(0.297/5/3) 2016-04-21 22:14 x86_64
Time: Mon, 27 Aug 2018 07:18:56 GMT
Connecting to host x.x.x.x, port 27015
      Cookie: ks4000721.1535354336.472314.35587933
      TCP MSS: 0 (default)
[  4] local y.y.y.y port 53585 connected to x.x.x.x port 27015
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  18.0 MBytes   151 Mbits/sec
[  4]   1.00-2.00   sec  11.1 MBytes  93.3 Mbits/sec
[  4]   2.00-3.00   sec  11.2 MBytes  94.4 Mbits/sec
[  4]   3.00-4.00   sec  7.62 MBytes  64.0 Mbits/sec
[  4]   4.00-5.00   sec  14.5 MBytes   122 Mbits/sec
[  4]   5.00-6.00   sec  11.0 MBytes  92.2 Mbits/sec
[  4]   6.00-7.00   sec  10.8 MBytes  90.2 Mbits/sec
[  4]   7.00-8.00   sec  11.0 MBytes  92.3 Mbits/sec
[  4]   8.00-9.00   sec  11.2 MBytes  94.4 Mbits/sec
[  4]   9.00-10.00  sec  11.0 MBytes  92.2 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   118 MBytes  98.6 Mbits/sec                  sender
[  4]   0.00-10.00  sec   104 MBytes  86.9 Mbits/sec                  receiver
CPU Utilization: local/sender 1.1% (0.6%u/0.4%s), remote/receiver 0.4% (0.2%u/0.3%s)

iperf Done.

A similar issue exists with my other server, which runs 2012 R2: it can only max out my home PC's bandwidth when I specify the window size manually.

PS C:\iperf> .\iperf3.exe -c x.x.x.x --port 27015 --verbose
iperf 3.1.3
CYGWIN_NT-6.3 S 2.5.1(0.297/5/3) 2016-04-21 22:14 x86_64
Time: Mon, 27 Aug 2018 07:23:02 GMT
Connecting to host x.x.x.x, port 27015
      Cookie: 
      TCP MSS: 0 (default)
[  4] local x.x.x.x port 51271 connected to x.x.x.x port 27015
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  2.25 MBytes  18.9 Mbits/sec
[  4]   1.00-2.00   sec  3.50 MBytes  29.3 Mbits/sec
[  4]   2.00-3.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   3.00-4.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   4.00-5.00   sec  3.38 MBytes  28.3 Mbits/sec
[  4]   5.00-6.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   6.00-7.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   7.00-8.00   sec  3.50 MBytes  29.4 Mbits/sec
[  4]   8.00-9.00   sec  3.50 MBytes  29.3 Mbits/sec
[  4]   9.00-10.00  sec  3.50 MBytes  29.4 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  33.6 MBytes  28.2 Mbits/sec                  sender
[  4]   0.00-10.00  sec  33.6 MBytes  28.2 Mbits/sec                  receiver
CPU Utilization: local/sender 0.9% (0.3%u/0.6%s), remote/receiver 0.1% (0.0%u/0.0%s)

iperf Done.


PS C:\iperf> .\iperf3.exe -c x.x.x.x --port 27015 --verbose --window 409600000
iperf 3.1.3
CYGWIN_NT-6.3 S 2.5.1(0.297/5/3) 2016-04-21 22:14 x86_64
Time: Mon, 27 Aug 2018 07:23:30 GMT
Connecting to host x.x.x.x, port 27015
      Cookie: 
      TCP MSS: 0 (default)
[  4] local x.x.x.x port 51276 connected to x.x.x.x port 27015
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   392 MBytes  3.28 Gbits/sec
[  4]   1.00-2.00   sec  8.25 MBytes  69.3 Mbits/sec
[  4]   2.00-3.00   sec  23.2 MBytes   195 Mbits/sec
[  4]   3.00-4.00   sec  45.8 MBytes   384 Mbits/sec
[  4]   4.00-5.00   sec  35.9 MBytes   301 Mbits/sec
[  4]   5.00-6.00   sec  35.6 MBytes   299 Mbits/sec
[  4]   6.00-7.00   sec  35.2 MBytes   296 Mbits/sec
[  4]   7.00-8.00   sec  34.9 MBytes   292 Mbits/sec
[  4]   8.00-9.00   sec  34.9 MBytes   292 Mbits/sec
[  4]   9.00-10.00  sec  34.8 MBytes   291 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   681 MBytes   571 Mbits/sec                  sender
[  4]   0.00-10.00  sec   292 MBytes   245 Mbits/sec                  receiver
CPU Utilization: local/sender 2.6% (0.6%u/2.0%s), remote/receiver 8.3% (2.9%u/5.4%s)

iperf Done.

I've tried various tweaks and tools to fix this, but to no avail. The only way around it is to use a download manager that requests multiple parts at once, if I want to exceed the TCP window limit.
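
The same effect should be reproducible in iperf3 itself with parallel streams (-P); the stream count below is arbitrary:

PS C:\iperf-3.1.3-win64> .\iperf3.exe -c x.x.x.x --port 27015 -P 8

Each connection gets its own receive window, so the aggregate isn't held back by the per-connection limit, which is essentially what the download manager is doing.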

Manually setting the receive/send window results in slightly higher speeds, but the window only scales up to about 200 KB according to Wireshark: http://ss13.moe/uploads/2018-08-30_15-13-50.txt
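
For context on why that ceiling matters: a single TCP stream can't move more than window/RTT. Plugging in the ~213 KB maximum window from the Wireshark capture and the 80-90 ms latency mentioned in the comments below (rough numbers on my part):

PS C:\> 212952 * 8 / 0.085 / 1e6    # window in bits, divided by RTT in seconds, gives Mbit/s
20.0425411764706

That works out to roughly 19-21 Mbit/s at 80-90 ms (or about 28 Mbit/s if the RTT is nearer 60 ms), which is right where the untuned runs above flatten out.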

  • So you already tried tweaking Windows' TCP stack via the registry as described here: https://support.microsoft.com/en-us/help/224829/description-of-windows-2000-and-windows-server-2003-tcp-features, but it didn't help? – bcs78 Aug 28 '18 at 10:19
  • Yes, I set the TcpWindowSize registry entry as high as it would go, and even set the receive/send window manually on all 3 Windows machines per their maximum available bandwidth. The best it did was increase DL/UL to about 1.5-3.5 MB/s. The iperf log is too long to post, so I uploaded it here: http://ss13.moe/uploads/2018-08-30_15-13-50.txt I did run Wireshark to watch the window scaling: it starts at 65535 and goes up to a maximum of 212952 if left to its own devices. – user3930869 Aug 30 '18 at 20:13
  • Feel free to edit your original question to add this additional info. So the TCP window actually scales but maxes out at 200KB. What about latency and/or possible packet losses? You didn't say anything about the network infrastructure. – bcs78 Aug 31 '18 at 06:43
  • What if you set the autotuninglevel from "normal" to "experimental"? `netsh int tcp set global autotuninglevel=experimental` And what happens when you disable "ECN capability"? `netsh int tcp set global ecncapability=disabled` – bcs78 Aug 31 '18 at 07:16
  • No change in throughput with either of those. No packet loss and latency is around 80-90ms on average. – user3930869 Sep 02 '18 at 04:05
  • That latency is quite high and most likely that's causing this performance issue. You have to find a way to decrease the latency as low as possible (down to 15ms or lower) to solve this issue. – bcs78 Sep 02 '18 at 06:25
  • How is latency the issue if I can run a Linux distro off the same box and have no issues with throughput? – user3930869 Sep 02 '18 at 06:50
  • That's just another proof that Linux is better than Windows. :) I have similar differences. Iperfing "ping.online.net" (both with 50ms latency but not entirely the same network topology and not the same client hardware) results in an average throughput of 27Mbit/s with Windows and 88Mbit/s with Linux. – bcs78 Sep 02 '18 at 08:38
