Why can't any OS resume file transfers yet?

5

3

I mean, Windows 7, Ubuntu 9.10 and Snow Leopard... all the newest, top-of-the-line desktop operating systems still use a plain copy to transfer files, as far as I can tell from experience.

Instead of using a technique like rsync, SFTP, or whatever Time Machine uses for backups, when you want to copy a file over a network, move a large amount of data through USB, or even fill a really big pen drive, you run the imminent risk of having to start all over again (if you want convenience, of course).

So why do they insist on doing it that way?

edit: since this still has no answer even today, I'm bringing it to the most relevant discussion site I could find: http://discuss.howtogeek.com/t/why-every-os-still-cant-resume-file-transferring/16832

cregox

Posted 2010-02-17T02:58:10.320

Reputation: 5 119

Question was closed 2014-06-18T08:40:07.587

4Well, KDE 4 partially supports resuming: you can pause a transfer, but unfortunately if you start the copy again, it can't resume from where the transfer got cut off. – Sathyajith Bhat – 2010-02-17T21:34:05.990

1At least that shows somewhere someone who can do something about it is actually trying to. :) – cregox – 2010-02-17T23:18:12.177

Hey, cool, this became "community wiki". Diago, mind telling me why? This is new to me! :) I'm also curious about what would happen, from the system's point of view, if I "started a bounty". I won't; I have no reason to. Just want to know. – cregox – 2010-02-20T00:24:13.380

1@Cawas: Diago made this CW at Feb 18 at 3:04. I would guess he did so because there's no "right" answer. You're asking something that's more of a discussion, which means it should be Community Wiki. You can still start a bounty to get more answers, but up/downvotes here now have no effect on posters' rep. – Josh – 2010-02-23T14:45:18.427

@Josh Yeah, I agree. Thanks to you both. @Diago - This has no right answer indeed, but I'll mark one if it's good enough - no guesses or advice about what to do instead. I was clearly not asking about alternatives. I wanted to know whether there's an underlying technical issue that's so hard to overcome, or whether it's just about politics and economics. – cregox – 2010-02-23T19:44:05.253

For instance, @Sathya's comment was the closest to an answer so far. – cregox – 2010-02-23T19:45:01.040

Answers

7

Probably because approaches like those have downsides, such as lots of extra I/O operations and a performance hit.

A straight copy is probably better in terms of performance, I/O, and system complexity.

Josh K

Posted 2010-02-17T02:58:10.320

Reputation: 11 754

3Really? I don't agree. What if I'm copying a 4.7 GB DVD image to a network share and 75% of the way through I lose my network connection? Wouldn't it be much faster to just copy the remaining 25%? – Josh – 2010-02-17T03:22:51.393

8@Josh: I'll give you long odds that (for "users" rather than for people whose job is computers) most copies are small files; that most operations occur between locally mounted, reliable devices; and that those small, local moves total more than half the bytes copied. So the protection for big copies is not worth the extra overhead on the bulk of operations. As a stopgap you can use a more sophisticated method for your big, vulnerable copies. – dmckee --- ex-moderator kitten – 2010-02-17T03:37:05.683

There are downsides, but c'mon, systems nowadays can learn and be smarter than this. – cregox – 2010-02-17T03:49:29.420

I'm not disputing that it would be faster in that case; I'm saying that in the long run the vast majority of copies benefit from a straight copy. – Josh K – 2010-02-17T13:35:19.040

@Cawas: Where do you want this partial move stored? It's taking up space and other resources. What happens when people start partially moving files and then forget about them? – Josh K – 2010-02-18T03:47:33.473

@Josh K: I'm talking in general, really. I bump into this issue from time to time, and then I have to go and look for software to do the transfer right. I just find it amazing that no OS has taken care of this issue yet, especially now with all the big, cheap external hard disks, wireless networks able to transfer bigger and bigger files, and so on. -- As for "forgetting about partially moved files", that's quite easy: partial files are marked as such. You either delete them or resume them. It's better than having no option. – cregox – 2010-02-20T00:28:05.913

@Cawas: But you're still running into the issue of managing this. Copies aren't designed to be partially terminated. SFTP would require sending this internally to a local server. That's a large hassle to deal with in the OS: you'd have to set up the server, set paths, run the copy, and close everything down. All so you don't lose a bit of time?

With disk speeds on the rise and less information actually being stored on disk, I wouldn't think this is a big issue. – Josh K – 2010-02-20T05:39:50.660

"All so you don't lose a bit of time"? It's not just a bit of time at all! But even if it were one minute, yes. That's what systems are made for: saving us time. I believe that's also why computers even exist. I was not saying it is a big issue, but it is one. The only true solution we have today for transferring large amounts of data, other than over the Internet, is NOT doing so. Sure, you can use Internet tools, but instead we do it in little chunks, keeping things in sync day by day or with lots of patience. Look at Sterling Commerce: it's a big business that started by offering expensive solutions exactly for this. – cregox – 2010-02-20T17:31:19.783

Computers exist to make our lives easier, not more complicated. There's a certain level of internal "thinking" I will tolerate. I have never lost a file during a copy or lost time; it just doesn't happen that often. There is no "imminent risk" in copying files and no real reason to change. – Josh K – 2010-02-20T18:59:48.113

A complicated system can always be hidden from the end user. Even current filesystems are hardly simple, yet people have no trouble using them because the implementations are carefully hidden away. – syockit – 2011-03-05T15:02:46.507

3

I am just guessing here... but I suspect it's because of non-Super Users. For a less skilled computer user, it might be confusing to receive a prompt stating "This file already exists, do you want to: [Cancel] [Overwrite] [Append]". Keep in mind that the proper action is often replacing: people copy a newer version of a document to a USB drive to update it, or run a backup, and thus want to overwrite the previous version with the new one.

Just a few thoughts... not being an OS developer I can't say for sure ;-)

Josh

Posted 2010-02-17T02:58:10.320

Reputation: 7 540

I don't think there are many OS developers out there. ;) Windows is already too confusing for most less skilled users - I think there should at least be a way to enable such a feature. – cregox – 2010-02-17T03:48:38.330

One suggestion would be to add the file's hash to its header. Then the OS can compare the two, and if they are found to be identical, prompt the user to resume copying. For files with different hashes but the same name, the traditional "Overwrite/Cancel?" dialog can be used. Of course, a hash can be cumbersome to generate for very large files. – syockit – 2011-03-05T15:05:28.873
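A minimal sketch of that idea using standard tools (purely hypothetical; no OS stores hashes like this, and the file names and sizes are made up): hash only the first N bytes of the source, where N is the size of the partial destination file, and offer to resume only if the hashes match.

```shell
# Hypothetical prefix check: is the partial file a clean prefix of the source?
mkdir -p /tmp/hash-demo
head -c 300000 /dev/urandom > /tmp/hash-demo/source.bin
head -c 100000 /tmp/hash-demo/source.bin > /tmp/hash-demo/partial.bin  # simulate an interrupted copy

# Hash the same number of bytes from the source as the partial file holds.
size=$(wc -c < /tmp/hash-demo/partial.bin)
src_hash=$(head -c "$size" /tmp/hash-demo/source.bin | sha256sum | cut -d' ' -f1)
dst_hash=$(sha256sum /tmp/hash-demo/partial.bin | cut -d' ' -f1)

if [ "$src_hash" = "$dst_hash" ]; then
  echo "prefix matches: safe to resume"
else
  echo "mismatch: restart the copy"
fi
```

Hashing only the already-transferred prefix also softens the "cumbersome for very large files" objection: you read back only as many bytes as actually reached the destination, not the whole source.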

3

If you are transferring large files over an unreliable network, use rsync with the --partial option. Your use case is rather rare. One share I use takes over a day to transfer to, with no threat of lost data due to an incomplete transfer (it is slow for some reason; something to look into sometime).

Edit: @Cawas: the more common use case is copying a file over an older version of the same file. If the file has been modified (made bigger, in this case), trying to append the extra length of the new file onto the old file will result in a corrupt file. Protocols like rsync and FTP can resume only by assuming you are not doing this.

casualuser

Posted 2010-02-17T02:58:10.320

Reputation: 141

That's cool. I'm just bothered about why an option to resume broken file transfers isn't the default in any OS - even if just for specific transfer operations. Or at the very least, it could offer to enable "resuming" whenever it identifies that a transfer is big enough to break. – cregox – 2010-02-17T03:47:21.390

3

At least for Windows there is a remedy, and it's one of the reasons I'm using TeraCopy: it can resume broken file transfers.

Molly7244

Posted 2010-02-17T02:58:10.320

Reputation:

Thinking about it again, I shouldn't accept an answer just due to a lack of better ones. ;) – cregox – 2014-06-19T21:40:14.323

1That's the best answer so far, from my point of view... But it's really off topic! :P – cregox – 2010-02-20T00:28:45.693

1

The reason this feature has never been implemented is right here in front of your eyes: you just need to read through the comments to see that it is an extremely unpopular feature.

Add the overhead required for such an operation, and there you have your answer: it is not going to be implemented as a default feature in any major OS anytime soon.

Bruno9779

Posted 2010-02-17T02:58:10.320

Reputation: 1 225

*Popularity does matter, at least among the programmers* - that I agree with. I don't think it's that unpopular among programmers; it's just that most of those who are not satisfied with it at least know about rsync. And I bet most would appreciate having it built into the system. You still talk as if the overhead were huge... We have so many interpreted languages nowadays which have some "overhead" over C, yet they still exist. I don't think there is a significant overhead in having this implemented... maybe just in implementing and maintaining it. – cregox – 2014-06-19T21:44:18.427

Thanks for resurrecting this! ;-) I don't know; that all seems pretty reasonable but unsubstantiated. How much overhead is there, really? Popularity is not exactly the issue here, especially outside Windows. – cregox – 2014-06-17T16:52:14.897

Popularity DOES matter, at least among the programmers who are supposed to develop the feature. With the right rationale it is feasible. E.g., apply it only to files larger than "x", when the transfer is slower than "y", or when the estimated transfer time is more than "z", and keep the partial files for a maximum of "J". But it still sounds like a very specific feature, not likely to be developed or implemented. – Bruno9779 – 2014-06-17T16:59:04.403