I'm downloading from a server with FileZilla and each download maxes out at 1.3 MiB/s, but I can start concurrent downloads and each of them also runs at 1.3 MiB/s. So why can't I download a single file faster than 1.3 MiB/s and get closer to saturating the available bandwidth (~6+ MB/s)?
I know I could use another SFTP client that supports segmented downloads, such as lftp; does anyone know of other good open-source ones?
But I still want to know what limits a single-file download to just 1.3 MiB/s: is it a technical limitation of TCP, buffer sizes, etc., or a configuration issue? I have checked, and there is definitely no speed limit enabled in FileZilla.
I also tried rsync, and it was slower than FileZilla/SFTP. WinSCP was the slowest of all, whether using SCP or SFTP. So at a constant 1.3 MiB/s, FileZilla actually compares well against the other transfer methods.
If someone has a good explanation of why transfers peak at 1.3 MiB/s, I'd really like to know, and whether it's possible to go faster without resorting to segmented downloading. The server is running OpenSSH 6.7p1 (Debian); the client is FileZilla on Windows.
UPDATE: In response to Martin's information (see his answer below), I am adding that the ping between the server and the downloading client is a fairly constant 180–190 ms. CPU usage is also very low, 2–8% at most. I tried the latest WinSCP (5.7.3) and got about 555 KB/s in SFTP mode and about 805 KB/s at most in SCP mode, whereas if I start a second concurrent transfer in FileZilla, it also runs at a constant 1.3 MiB/s.
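Here's my back-of-envelope check, assuming the transfer can never have more than one window of unacknowledged data in flight, so throughput ≈ window size / RTT (the window sizes below are inferred from my measurements, not read from any actual config):

```python
# Back-of-envelope: if a transfer is limited by a fixed window (TCP receive
# window or SSH channel window), then max throughput ~= window_size / RTT.
# The windows printed here are only what the math implies from my numbers.

rtt = 0.185  # seconds (ping is 180-190 ms)

measured = {
    "FileZilla SFTP": 1.3 * 1024 * 1024,  # 1.3 MiB/s
    "WinSCP SFTP":    555 * 1000,         # ~555 KB/s
    "WinSCP SCP":     805 * 1000,         # ~805 KB/s
}

for client, rate in measured.items():
    implied_window_kib = rate * rtt / 1024
    print(f"{client}: implied window ~= {implied_window_kib:.0f} KiB")

# FileZilla's 1.3 MiB/s at ~185 ms works out to roughly a 250 KiB window,
# which would explain why a second concurrent transfer (its own connection,
# its own window) gets another 1.3 MiB/s instead of splitting the first.
```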
So could the 180 ms round trip to the server be a hard mathematical limit, as Martin and Michael touched on? Or could something else be to blame that I could fix to improve throughput? If not, I'd appreciate it if anyone knows another open-source downloader (like lftp, but one that runs well on Windows) that is secure and supports segmented downloading.