
I have a question that has been bothering me for quite some time now. I have several environments and servers, all of them connected to 1 Gbit Ethernet switches (e.g. Cisco 3560).

My understanding is that a 1 Gbit link should provide 125 MByte/s in theory, and at least ~100 MByte/s in practice.

The problem is that a single copy process only reaches ~20 MByte/s.

I'm aware of the following factors, but none of them makes any difference:

  • Source and Destination on the same switch or not
  • Copy utility: SCP, Rsync, Windows copy, Windows robocopy
  • SMB/CIFS, NFS, iSCSI
  • Disk storage: NetApp FAS, locally attached 15k SCSI

With all of these configurations, I never get more than ~25 MByte/s of throughput. The thing is that if I start multiple parallel copy streams, e.g. three rsync processes, I almost reach 90 MByte/s. I also ran some IOMeter tests and found that the chunk size makes a huge difference, but it is normally not tunable with the tools listed above (or is it?).

Jumbo frames are not enabled, but I'm unsure whether they would make a difference. TOE is enabled on all NICs.

What bottlenecks would you suspect? Do you have similar experiences? Are these the "natural" values one should expect?

Thanks in advance

zero_r

3 Answers


If it's all a per-stream thing, then you're coming up against the "bandwidth-delay product" problem. Basically, there's a limit to how much data will be "in flight" at any one time (in TCP, that's the "window size"), and for a given round-trip delay, you can't get more than a certain amount of data across in a given time period because the sender has to wait for the recipient to ack the receipt of the data already sent before they can send more. Roughly, your TCP throughput is going to be window size / round trip delay (in seconds).
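To put some rough numbers on it (the window size and RTT below are illustrative values, not measurements from your network), here is a minimal sketch of the arithmetic:

    # Rough back-of-the-envelope view of the per-stream ceiling imposed by the
    # bandwidth-delay product. Window size and RTT are example values only.

    def max_throughput_mb_s(window_bytes, rtt_seconds):
        """TCP can have at most one window of unacknowledged data per round trip."""
        return window_bytes / rtt_seconds / 1e6

    # Example: a 64 KB window over a LAN with ~1 ms round-trip time
    print(max_throughput_mb_s(64 * 1024, 0.001))   # ~65 MB/s ceiling per stream

    def window_needed_bytes(link_bits_per_s, rtt_seconds):
        """How much data must be in flight to fill the link at a given RTT."""
        return link_bits_per_s / 8 * rtt_seconds

    print(window_needed_bytes(1e9, 0.001))         # ~125 KB needed for 1 Gbit/s

That also explains why three parallel rsyncs get you close to wire speed: each stream carries its own window's worth of data in flight.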

This isn't just a TCP thing (although I use that example because it's the one with the most literature out there if you want to go looking further). All protocols that wait for acknowledgement before sending more data will suffer from the same problem. In theory, you could just send all the data and not wait for acks, but that's generally considered a bad thing because you can "swamp" the recipient without giving them any way to stop the firehose.

For most protocols, you can tune the window size so that you can have more data "in flight" at once, and some protocols have options you can tweak to reduce the impact of acknowledgements, but they all have tradeoffs you need to think about for your application.
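As a generic illustration of what "tuning the window" means at the socket level (this is not specific to any of the tools you listed, and the buffer size is just an example value), you can ask the OS for larger send/receive buffers:

    import socket

    # Sketch of a sender that requests larger socket buffers so more data can
    # be in flight per round trip. Whether (and how far) the OS honors the
    # request is system-dependent; on Linux, the net.core.rmem_max/wmem_max
    # and net.ipv4.tcp_rmem/tcp_wmem sysctls set the ceiling.
    def open_tuned_connection(host, port, buf_bytes=4 * 1024 * 1024):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_bytes)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf_bytes)
        s.connect((host, port))
        return s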

womble

In my experience the bottleneck is always the disks. I have never used iSCSI or a SAN, so for me the only way to increase performance is to use RAID0 with a dedicated RAID card.
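If you want a quick sanity check that the disks can actually sustain sequential reads faster than ~25 MByte/s, something like this rough sketch works (the file path is a placeholder; use a file larger than RAM, otherwise you are mostly measuring the OS page cache):

    import time

    # Quick-and-dirty sequential read benchmark. Path and chunk size are
    # placeholders; pick a file bigger than RAM to avoid cache effects.
    def sequential_read_mb_s(path, chunk_bytes=1024 * 1024):
        total = 0
        start = time.time()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_bytes)
                if not chunk:
                    break
                total += len(chunk)
        return total / (time.time() - start) / 1e6

    print(sequential_read_mb_s("/path/to/large_test_file"))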

lg.
  • Ok, that might be another reason. But in my case, I see the NetApp storage reading and writing at 200 MByte/s when it copies data internally. But thank you for the hint. – zero_r Sep 24 '09 at 08:38
  • If you use Windows, I found this tool (http://www.microsoft.com/whdc/device/network/tcp_tool.mspx) to test network performance... but I haven't used it myself! – lg. Sep 24 '09 at 09:01

The ever-excellent tomshardware.com did a great article on how to achieve 100 MByte/s over a Gigabit Ethernet link, and as lg. says, it all comes down to the disks.

Have a read (HERE) and see what you think.

Chopper3