I have a question that has been bothering me for quite some time now. I have several environments and servers, all of them connected to 1 Gbit Ethernet switches (e.g. Cisco 3560).
My understanding is that a 1 Gbit link should provide 125 MB/s in theory; in practice it should at least reach ~100 MB/s.
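To put numbers on "theory vs. practice", here is a rough calculation, assuming a standard 1500-byte MTU and plain IPv4 + TCP headers:

```python
# Ceiling for TCP payload throughput on 1 GbE with a 1500-byte MTU.
line_rate = 125e6   # 1 Gbit/s = 125 MB/s raw

# Per frame: 38 bytes of Ethernet overhead (preamble + header + FCS +
# inter-frame gap) and 40 bytes of IPv4 + TCP headers, so 1460 payload
# bytes travel in 1538 bytes on the wire.
payload, on_wire = 1460, 1538
print(f"max TCP payload rate: {line_rate * payload / on_wire / 1e6:.0f} MB/s")
# -> max TCP payload rate: 119 MB/s
```

So ~100 MB/s seems like a realistic target, not an optimistic one.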
The problem is that a single copy process only reaches ~20 MB/s.
I have already tested the following factors, and none of them makes any difference:
- Source and destination on the same switch or on different switches
- Copy utility: scp, rsync, Windows copy, Windows robocopy
- Protocol: SMB/CIFS, NFS, iSCSI
- Disk storage: NetApp FAS, locally attached 15k SCSI disks
With all of these configurations I never get more than ~25 MB/s of throughput. The thing is, if I start multiple parallel copy streams (e.g. three rsync processes at once), I almost reach 90 MB/s. I also ran some IOMeter tests and found that the chunk size makes a huge difference, but it normally isn't tunable in the tools listed above (or is it?).
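To take disks and file-copy protocols out of the equation, a memory-to-memory probe along these lines can isolate the network side. This is only a minimal Python 3 sketch; the port number is an arbitrary placeholder, and CHUNK is the knob to vary:

```python
import socket
import sys
import time

PORT = 5001          # arbitrary test port (placeholder)
CHUNK = 64 * 1024    # bytes per send()/recv(); vary this to see the effect
TOTAL = 1 << 30      # transfer 1 GiB in total

def serve():
    # Receiver: accept one connection, drain it, report throughput.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
            print(f"{received / elapsed / 1e6:.1f} MB/s from {addr[0]}")

def send(host):
    # Sender: push TOTAL bytes in CHUNK-sized writes.
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK

if __name__ == "__main__":
    serve() if len(sys.argv) == 1 else send(sys.argv[1])
```

Run it with no arguments on the receiver and with the receiver's hostname on the sender; varying CHUNK and starting several senders in parallel should show whether the limit is per stream or per link.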
Jumbo frames are not enabled, but I'm unsure whether they would make a difference. TOE is enabled on all NICs.
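For what it's worth, my rough math on jumbo frames (same header assumptions as above) suggests they can't account for the gap:

```python
# Wire efficiency (TCP payload share) for standard vs. jumbo frames.
def efficiency(mtu):
    # 38 bytes Ethernet framing overhead, 40 bytes IPv4 + TCP headers
    return (mtu - 40) / (mtu + 38)

print(f"MTU 1500: {efficiency(1500):.1%}")   # ~94.9%
print(f"MTU 9000: {efficiency(9000):.1%}")   # ~99.1%
```

That is roughly a 4% gain, nowhere near the difference between 20 MB/s and 100 MB/s, which makes me suspect a per-stream limit (TCP window size, per-request latency) rather than frame overhead.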
What bottlenecks would you look at first? Do you have similar experiences? Are these the expected "natural" values?
Thanks in advance