What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

8

7

I have a large amount of data - about 500 GB - on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it.

I want to copy everything to an external USB hard drive (1.5 tb capacity).

The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto the drive.

It's started copying, but it seems to be proceeding very slowly. About 10 minutes in, it has only copied about 500 MB, which I'm sure is slower than what I could achieve with a smaller amount of data.

So I'm wondering if there's a quicker way of doing this.

Would it be better to copy it in sections (say, 500 MB at a time) rather than all at once?

jonathanconway

Posted 2009-08-22T08:49:11.497

Reputation: 485

Answers

10

Make sure that you are using a USB 2.0 port and that the USB controller reports "high speed" -- a lot of disreputable manufacturers sell "full speed" (12 Mbit/s) USB devices with a prominent "USB 2.0" label on them, which is technically accurate but fools people who assume USB 2.0 implies "high speed" (480 Mbit/s).
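If you want to verify which speed the drive actually negotiated, a quick check (assuming the usbutils package, which provides lsusb, is installed) is:

lsusb -t   # a high-speed device is listed with 480M; a full-speed device shows only 12M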

Also check whether your USB hard disk has the "sync" mount option enabled; that is another common cause of slowdown. You can remount the filesystem with

mount -o remount,async... /dev/usbdisk ...
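As a concrete sketch, assuming the disk is mounted at /media/usbdisk (substitute your actual device and mount point):

grep usbdisk /proc/mounts                      # show the current mount options; look for "sync"
sudo mount -o remount,async /media/usbdisk     # switch to asynchronous writes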

rkthkr

Posted 2009-08-22T08:49:11.497

Reputation: 401

+1 .. 500MB/10min = 50MB/min = 5MB/6sec = 0.83MB/s = 6.7Mb/s. it's not even hitting full speed. – quack quixote – 2009-10-10T13:03:58.963

10

Whatever your interface to the hard disk, you should use rsync to copy the data: it can resume transfers and keep partially transferred files (--partial), it shows progress, and checksums are verified on the destination media.

In short:

rsync -avP src/ dst/

If you are transferring over a network, add the -z argument to enable compression; in most cases you will be bandwidth-bound, so it will not hurt performance even if the content is already compressed.
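For example, to push the data to a remote machine over SSH with compression (backup-host and both paths are placeholders):

rsync -avPz /home/user/data/ user@backup-host:/backup/data/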

If you can, you should tar your data before transferring it; this relieves the filesystem of having to create an entry, set timestamps, and allocate space for every individual file.
You'll likely see an improvement in copy speed.
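A minimal sketch, assuming the source directory is /home/user/data and the USB drive is mounted at /media/usbdisk (both hypothetical paths):

tar -cf /media/usbdisk/backup.tar -C /home/user/data .   # write everything into a single archive on the external drive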

Shadok

Posted 2009-08-22T08:49:11.497

Reputation: 3 760

I know that I'm late to this one, but rsync really saved me. I had to recover some data for a customer and the file copy within my Xubuntu system was locking up (old machine, old drives). rsync powered right through the file copy. I used slightly different options: rsync source dest -r -v --ignore-existing – pStan – 2015-01-15T15:21:35.793

9

If I had a really huge amount of data to copy, I'd whip the drive out of the external enclosure, put it internally in the computer, and put it back in the enclosure when I was done. I keep a couple of SATA cables and drive bays free for just that sort of need. Opening up the enclosure and reclosing it afterwards is time-consuming, but the copy itself will be miles faster.

Paul Tomblin

Posted 2009-08-22T08:49:11.497

Reputation: 1 962

3

You didn't mention what file system is on the USB disk. Is it a Linux-native file system, or are you using NTFS/FAT32? If you're having to go through FUSE, I think it will cost you in performance.
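You can check what is actually mounted with something like this (the mount point is an example):

df -T /media/usbdisk   # prints the filesystem type: ext4, vfat, fuseblk (NTFS-3G), ...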

To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto the drive.

If you have that much data to copy, personally I would skip the GUI, since it adds overhead to your copy operation. Instead I would use one of the many CLI commands (cp, rsync, cpio, tar, etc.) for copying files.
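For instance, a plain recursive copy that preserves permissions and timestamps (both paths are examples):

cp -a /home/user/data /media/usbdisk/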

Would it be better to copy it in portions of 500MB or so, rather than all at once?

If you are working with something like rsync, there should be no reason to copy files in small sets.
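If a transfer is interrupted, re-running the same rsync command (hypothetical paths below) skips files that already match on the destination and carries on from where it stopped:

rsync -avP /home/user/data/ /media/usbdisk/data/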

Zoredache

Posted 2009-08-22T08:49:11.497

Reputation: 18 453

you're right, ntfs performance on linux is sloooooow. – quack quixote – 2009-10-10T13:06:48.073