
I'm trying to achieve 4Gbps throughput between my computer and my Synology NAS. Unfortunately, I am only getting 1Gbps between these systems. My setup is below:

Synology DS1515+ with 4 NICs bonded:

[Screenshot: Synology bond configuration]

Windows 10 Enterprise system with a 4-port Intel I350-T4 NIC running Intel's 22.1 drivers (which I grabbed from here: https://downloadcenter.intel.com/download/25016/Intel-Network-Adapter-Driver-for-Windows-10?product=59063):

[Screenshot: Intel NIC team configuration]

Dell PowerConnect 5324 switch using LACP with two LAG groups - one for the Synology and the other for the PC:

[Screenshot: switch LAG configuration]

I tested the setup by sending a large file (4.5GB) from the Synology to the PC (and also tried from the PC to the Synology). I checked the network utilization while doing this:

[Screenshot: Task Manager / Resource Monitor during the transfer]

Notice the maximum throughput shown in the Task Manager and Resource Monitor is 1Gbps rather than 4Gbps.

How can I utilize the full 4Gbps?

NOTE: Speed is still capped even when transferring more than one file at a time.

[Screenshot: network utilization while transferring multiple files]

Jaxian
  • How did you set up your Intel NIC team? Which configuration did you use? – Lenniey Apr 05 '17 at 07:29
  • @Lenniey: I used IEEE 802.3ad Dynamic Link Aggregation. The choices are Adapter Fault Tolerance, Adaptive Load Balancing, Static Link Aggregation, Switch Fault Tolerance, and 802.3ad. – Jaxian Apr 05 '17 at 07:32
  • Did you try SLA? I never had any problems using SLA or DLA, but I'd try that next. You may have to reconfigure your switch, though. – Lenniey Apr 05 '17 at 07:37

2 Answers


> How can I utilize the full 4Gbps?

Do more stuff. Transfer more files in more directions at once.

All the packets going in one direction for a single connection have the same MACs, VLAN, Ethertype, source module, port ID, and so on. The hash that selects the egress port is computed from those fields, so every packet of that conversation lands on the same physical link; there's no way to distribute them over multiple links. Thus LAG/LACP limits a single conversation to the speed of one link.
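To make that concrete, here is a minimal Python sketch of how a deterministic transmit-hash policy works. The XOR-of-MACs scheme below is an illustrative assumption, not what the 5324 or the I350 actually implements; real devices hash various combinations of MAC, IP, and port fields.

```python
def egress_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick a physical link for a frame by hashing its address fields.
    Every frame of one conversation carries the same fields, so the
    hash -- and therefore the link -- never changes for that flow."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links

pc, nas = "00:11:22:33:44:55", "66:77:88:99:aa:bb"

# A single PC-to-NAS transfer: every frame maps to the same link.
print(egress_link(pc, nas, 4))  # same index for every packet of this flow

# Several different clients talking to the NAS can spread across links.
for client in ("00:11:22:33:44:01", "00:11:22:33:44:02", "00:11:22:33:44:03"):
    print(egress_link(client, nas, 4))
```

This is also why the multi-file test in the question still capped at 1Gbps: all of those transfers share the same PC-to-NAS address pair (and, with classic Windows SMB, typically a single TCP connection), so they all hash to the same link.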

Alternatively, you could use something other than LAG/LACP, such as round robin. But that has very serious drawbacks and will likely perform worse than using a single link.
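For reference, generic Linux bonding offers round robin as mode balance-rr; neither the PowerConnect 5324 nor Intel's teaming on Windows exposes it, so this sketch is purely to show what the technique is (it assumes a Linux box with interfaces eth0/eth1):

```bash
# balance-rr sprays successive packets across slaves regardless of flow,
# which is exactly what causes out-of-order delivery at the receiver.
ip link add bond0 type bond mode balance-rr
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```

Because successive packets of one TCP stream leave on different links, they can arrive out of order, and the receiver's reordering cost usually eats most of the gain (see suprjami's ~1.8x observation in the comments).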

David Schwartz
  • That isn't the issue (please see updated photo). Also, LACP is supposed to increase throughput by aggregating the physical devices into a single 'virtual' device using round robin. – Jaxian Apr 05 '17 at 07:30
  • This answer is wrong; 802.3ad specifically states that it increases throughput. – Lenniey Apr 05 '17 at 07:39
  • @Lenniey It does increase throughput. But nevertheless, a single connection is limited to the speed of the fastest link. See, for example, [slide 3 of this presentation](http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf) on the IEEE's web site, which says, "*All packets associated with a given “conversation” are transmitted on the same link to prevent mis-ordering*". – David Schwartz Apr 05 '17 at 07:49
  • @Jaxian One can theoretically use LACP with round robin distribution, but most devices aren't capable of it, and it's done very rarely because it tends to perform very badly. Devices are generally not designed to efficiently handle out-of-order packets at high speed, and the cost of out-of-order processing (which generally cannot be parallelized) is higher than the benefit of using the extra links. Do you have any reason to believe you are actually doing round robin? I'm nearly certain the 5324 can't do it. (For LAG, not for multiple equal-weight routes or QoS groups.) – David Schwartz Apr 05 '17 at 07:51
  • @DavidSchwartz I use SLA and DLA on some servers to bond 2-4 Gbps NICs to some 10 Gbps switches / backbone / whatever. All of them get close to 2-4Gbps throughput in a single file transfer; I cannot confirm what's in that slide. Maybe some drivers add enhanced single-conversation throughput or something, but yeah, I know what I see :) – Lenniey Apr 05 '17 at 07:53
  • @Lenniey I don't know what hardware you have, I only know what hardware the OP has and what the standards say. – David Schwartz Apr 05 '17 at 07:55
  • @DavidSchwartz you're right, maybe his hardware / drivers can't do what mine can, and you're absolutely right on the standard, too. Still, I definitely see it: some hardware / drivers add to that. Sorry! – Lenniey Apr 05 '17 at 07:58
  • See this PDF from VMware: http://www.vmware.com/pdf/esx2_NIC_Teaming.pdf; they state their "teaming" can enhance single-conversation speed: `By installing multiple physical NICs on a server and grouping them into a single virtual interface, users can achieve greater throughput and performance from a “single” network connection.` But yes, that's proprietary and not what the standard defines. Learned something again. – Lenniey Apr 05 '17 at 08:04
  • @DavidSchwartz yes, I know, I just wrote that. Just wanted to confirm your answer – Lenniey Apr 05 '17 at 08:33
  • This is the oldest question in the world. David's answer is correct. LACP does not work this way. You cannot make a single TCP stream go faster than the speed of a single slave. Like he said, you can use round robin and "brute force" faster-than-slave speed, but you'll get heavy TCP out-of-order delivery and eventually be limited in speed. In my experience this tops out at about 1.8x slave speed. If you need faster than 1Gbps, then get faster NICs and faster network infrastructure. – suprjami Apr 15 '17 at 11:31

The way to get more throughput with LAG/LACP here is SMB Multichannel. HOWEVER, it is not considered stable even under the latest Samba release. So can it be done? Yes: SSH into your Synology, modify the Samba config file, and restart the service (or reboot). Then verify with PowerShell on your machine that multichannel is enabled and in use during a file transfer.

NOTE: Realize that you are potentially putting your data at risk IF you choose to do this, because it's unsupported by Synology while the feature is not yet considered "stable" by Samba.

The best way around this is to get a model with a 10GbE card. However, your switch doesn't have SFP+ ports, just SFP, which is also limited to 1Gbps, so you would need another switch as well - or just connect the NAS directly to your machine with another 10GbE NIC.
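To make those steps concrete, here is a rough sketch, not a supported procedure: the smb.conf location and restart method vary by DSM version, DSM may regenerate the file and overwrite manual edits, and the option below is the stock Samba parameter rather than anything Synology documents.

```ini
# On the Synology, via SSH: add to the [global] section of the Samba config
# (commonly /etc/samba/smb.conf -- an assumption; DSM may regenerate it).
# Restart the SMB service afterwards (method varies by DSM version).
[global]
    server multi channel support = yes
```

Then, on the Windows machine, check with PowerShell:

```powershell
# Confirm the SMB client is willing to use multichannel
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Run this during a file transfer; multiple connections per server
# indicate multichannel is actually in use
Get-SmbMultichannelConnection
```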

Forum LINK

Brad