How to speed up the ftp upload process?

7

3

So why are FTP uploads so slow? I am using FileZilla as a client.

I have about 10 MB spread across 1000 files, and I can upload each individual file at 300-500 kB/s, yet the whole upload is incredibly slow because of the queueing that happens as files are uploaded. For every single file the client performs all kinds of commands and connection operations before actually uploading.

Is there no way to skip over these commands? I am new to FTP clients, uploads, websites, etc. Is this standard practice? Is this just the way FTP uploads work? Don't you get bored waiting 20 minutes for 8-10 MB?

How can I efficiently upload 100 MB or more?

Cristian

Posted 2011-06-13T17:41:23.013

Reputation:

@afrazier Theoretically not. It depends on the nature of the protocol in use, which doesn't support pipelining. For example, pipelined HTTP was designed to reduce latency – usr-local-ΕΨΗΕΛΩΝ – 2014-10-21T20:05:11.910

@djechelon: Pipelining helps, but you still have the problem of command overhead and most pipeline implementations limit the number of active simultaneous requests (because overhead can still swamp the payload eventually). – afrazier – 2014-10-21T20:14:40.403

Transferring a large number of small files is much less efficient for any protocol, not just FTP. It's just much more noticeable over the internet (or a WAN) because the latency makes the command overhead swamp the actual transfer rate. – afrazier – 2011-06-14T17:59:17.967

Answers

9

Sadly, this is the way FTP functions. To transfer lots of small files efficiently, either archive them locally, transmit the entire archive via FTP, and then unarchive the files on the remote machine, or turn on simultaneous uploads so the client uploads, say, 10 files at once. This helps fully saturate your upload link.
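As a rough illustration of the archive-first idea, here is a minimal Python sketch using the standard ftplib module; the host, credentials, and paths are placeholders you would replace with your own:

```python
# Sketch of the "archive first" approach. Host, credentials, and
# directory names below are placeholders, not real values.
import shutil
from ftplib import FTP

# Pack the whole local directory into a single zip archive.
archive = shutil.make_archive("site_upload", "zip", root_dir="local_site")

with FTP("ftp.example.com") as ftp:        # placeholder host
    ftp.login("user", "password")          # placeholder credentials
    with open(archive, "rb") as f:
        # One STOR instead of ~1000, so the per-file command overhead is paid once.
        ftp.storbinary("STOR site_upload.zip", f)

# The archive still has to be unpacked on the server afterwards,
# e.g. via an SSH session or a hosting control panel's extract feature.
```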

Darth Android

Posted 2011-06-13T17:41:23.013

Reputation: 35 133

With an avg. of 100 kB per file, they are not small files. – Captain Giraffe – 2011-06-13T17:53:29.980

@Captain Giraffe: On most WAN/Internet links, 100 kB is still small enough that latency swamps throughput when it comes to uploading. – afrazier – 2011-06-14T17:56:09.030

2

Have you tried compressing the files locally then uncompressing them on the server? Then you'd only have to transfer one small(er) file.

If it's applicable, you could also copy only the files that have changed since your previous upload. Tools like rsync (if you have SSH access) and robocopy (if it's a Windows server) could help you do this.
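If you only have FTP access (no SSH for rsync), you can get a crude changed-files-only upload by remembering modification times from the previous run. A minimal Python sketch, assuming a placeholder host and credentials and a hypothetical manifest file name:

```python
# Upload only files changed since the previous run, tracked in a local
# manifest. Host, credentials, and file names are placeholders.
import json
import os
from ftplib import FTP

MANIFEST = ".last_upload.json"   # hypothetical local state file
seen = json.load(open(MANIFEST)) if os.path.exists(MANIFEST) else {}

with FTP("ftp.example.com") as ftp:
    ftp.login("user", "password")
    for name in os.listdir("local_site"):
        path = os.path.join("local_site", name)
        if not os.path.isfile(path):
            continue
        mtime = os.path.getmtime(path)
        if seen.get(name) == mtime:
            continue                          # unchanged since last upload
        with open(path, "rb") as f:
            ftp.storbinary("STOR " + name, f)
        seen[name] = mtime

with open(MANIFEST, "w") as f:
    json.dump(seen, f)
```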

Sam Johnson

Posted 2011-06-13T17:41:23.013

Reputation: 231

1

I use Auto FTP Manager. It runs multiple simultaneous FTP transfers so your entire upload bandwidth can be used:

Auto FTP Manager makes it easy to schedule and automate your FTP transfers. Use Auto FTP Manager to connect to any FTP server and automatically upload and download files. Plan and automate your workflow. Let your PC move or synchronize files between PC to FTP Server, PC to PC, and FTP Server to FTP Server, automatically according to a schedule...

... Auto FTP Manager is multi-threaded, allowing you to open connections to multiple FTP servers at the same time. The program can transfer files in the background while you work on other tasks.

adam0718

Posted 2011-06-13T17:41:23.013

Reputation: 11

1

I know this is an old post, and there isn't a lot you can do at the application layer, since FTP relies on TCP for sending the bytes. There are a few things, though:

  1. Enable simultaneous uploads as mentioned above (this helps utilize more of your bandwidth, since it takes time for TCP to "ramp up" each connection, so doing transfers in parallel is more efficient; see the sketch after this list)
  2. Archive the files as mentioned above (again, because it allows TCP to ramp up once over the whole transfer instead of individually for each file)
  3. Check out SuperTCP: it's a new product that (full disclosure) I'm helping build, which optimizes TCP uploads and can help FTP go faster in a lot of cases. We're launching a beta soon and would love to have you help us test it!
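For item 1, here is a rough Python sketch of simultaneous uploads with one FTP connection per worker (host, credentials, and directory are placeholders; FileZilla's maximum-simultaneous-transfers setting does essentially the same thing):

```python
# Parallel FTP uploads: several connections, each uploading its own share
# of the files. Host, credentials, and directory names are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

FILES = [os.path.join("local_site", n) for n in os.listdir("local_site")
         if os.path.isfile(os.path.join("local_site", n))]

def upload(path):
    # Each worker opens its own connection; a single FTP session is not thread-safe.
    with FTP("ftp.example.com") as ftp:
        ftp.login("user", "password")
        with open(path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(path), f)

# Roughly the "10 files at once" idea: while some connections are still
# doing command overhead, others are already moving data.
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(upload, FILES))
```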

ryno2019

Posted 2011-06-13T17:41:23.013

Reputation: 11