
When I used RAR, I had an option to add some extra recovery data in order to cope with an imperfect medium (think floppy disks). It saved my day several times, especially when dealing with old CDs (scratched, or where the reflective layer was damaged with bubbles).

I also used Parchive to spread data across several CDs plus a "parity" one, in a manner not unlike RAID.

So, in the era of DVDs, external HDDs and flash storage:

  1. Is it still worth it?
  2. Is it only available as an external package like Parchive, or only with RAR, given that various better compression formats exist?

I don't want a complete solution like the ones asked here, here or here; ideally just something as lightweight as gunzip in usage (stdin/stdout).

Steve Schnepp

5 Answers


Dar at http://dar.linux.free.fr/ supports parity and a bunch of other options.

As for whether it's still worth it? I make two copies of everything I back up, and a third for super important stuff (tax returns, mortgage documents, etc.): one on a NAS for easy access and one on DVD for long-term storage. The third copies tend to make their way onto DVD in my fire safe, in a nice case to prevent scratching.

It comes down to the value of the data you're saving.

skitzot33

I usually use par2 when dealing with somewhat large (>100 MB) files. The extra processing time is worth the peace of mind and isn't that noticeable.
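
A minimal sketch of how that can be scripted, assuming the par2 command-line tool (par2cmdline) is installed and on the PATH; the function names and the 10% redundancy figure are only illustrative:

    # Wrap the par2 CLI: "par2 create -rN" writes .par2 recovery volumes
    # next to the file; "par2 verify"/"par2 repair" check and rebuild later.
    import subprocess
    import sys

    def create_recovery(path, redundancy=10):
        # 10% redundancy is only an illustrative figure, not a recommendation.
        subprocess.run(
            ["par2", "create", f"-r{redundancy}", f"{path}.par2", path],
            check=True)

    def verify_and_repair(path):
        # verify returns non-zero when damage is found; repair then tries
        # to rebuild the damaged blocks from the recovery volumes.
        if subprocess.run(["par2", "verify", f"{path}.par2"]).returncode != 0:
            subprocess.run(["par2", "repair", f"{path}.par2"], check=True)

    if __name__ == "__main__":
        create_recovery(sys.argv[1])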

Bob

For large volumes I don't think it's worth the extra processing overhead of creating parity data.

Just go with the rule of keeping 'it' in multiple places: disks are cheap, the internet clouds are here, and online storage is abundant.

Where possible, store on two different technologies. If some strange airborne fungus wipes out all your DVDs, you may still be able to recover the data from, say, a hard disk or online storage.

jafin

I suppose it depends on your environment. I occasionally get an optical disk that can't be read, but rare is the time that I've actually lost data because of it.

If there were a solution that was nearly universal and unobtrusive, and didn't double the amount of space needed, then I think it would be worth it. But it hasn't affected me much, so I don't see the need, even though others might.

When I'm transferring things, I typically do so over the network, and when I don't, md5sum tells me whether I got a good copy, even before I disconnect the storage.
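
For example, a rough Python equivalent of that check (the helper names are just illustrative): hash the source and the copy and compare, before trusting the copy.

    # Hash source and copy in chunks and compare, so a bad transfer is
    # caught before the original is removed or the storage disconnected.
    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_is_good(source, copy):
        return md5_of(source) == md5_of(copy)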

Matt Simmons
  • I've had cheap no-name brand disks go bad on me for no reason before, so quality counts. And you're right about environment; heat is bad for disks. – skitzot33 Jun 02 '09 at 12:23

Redundancy like that is really intended for cases where transmission can fail and retransmission is expensive. If you are sending your archive to Mars, please do include redundancy, as retransmitting any or all of it might take a while.

These days, reliable transmission (through retransmissions) is the norm, bandwidth is cheap, and latency is generally low enough for most "normal" apps.

In terms of media failure, I would suggest that the amount of redundant data you would need to recover from all but the most trivial failures is too much to be worthwhile. It is fairly unlikely for just one or two bits to go wonky at a time, certainly in my experience of disk failure. Basically you'd be storing extra bytes all over the place for very little practical return.

I suggest you put your redundancy elsewhere (RAID, two CDs, etc.); it is easier and more reliable :)

Tom Newton