I'm downloading a large file (1.2 TB) over HTTP via `wget`. The download takes about a week, and the file has arrived corrupted twice now (it fails the md5 check, which itself takes days to run).
Is there a good way to validate the file piecemeal over HTTP, using say `curl`? Or to break it into separate blocks, so that I could identify a specific bad block and redownload just that section?
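This is roughly what I have in mind, assuming the server honours HTTP `Range` requests (the URL, file name, and 1 GiB block size below are just placeholders; I'd still need some trusted per-block checksums to know which blocks are actually bad, which is part of what I'm asking):

```bash
#!/usr/bin/env bash
# Sketch: treat the file as fixed-size blocks, hash one block of the local
# copy, and re-fetch just that byte range in place if it turns out to be bad.
url="http://example.com/big-file.tar"   # placeholder URL
file="big-file.tar"                     # local copy
bs=$((1 << 30))                         # 1 GiB per block (arbitrary choice)
block=$1                                # block number to check / repair

start=$((block * bs))

# Hash the local block (nothing to compare it against yet, though).
dd if="$file" bs=1M skip=$((start / 1048576)) count=1024 2>/dev/null | md5sum

# Re-download only this block and splice it into the existing file in place.
curl -s -r "${start}-$((start + bs - 1))" "$url" \
  | dd of="$file" bs=1M seek=$((start / 1048576)) conv=notrunc 2>/dev/null
```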
The file is a `tar` archive, so I believe corruptions could be identified block by block, sequentially, during unpacking.
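For instance (if I understand GNU tar's `--block-number` option correctly), something like this might narrow down roughly where the damage sits, without a full unpack:

```bash
# Sketch: list the archive with block numbers, so whatever point the listing
# stops or errors at gives an approximate block offset of the corruption.
# Assumes GNU tar; big-file.tar is the local copy.
tar --block-number -tvf big-file.tar > listing.txt 2> errors.txt
tail errors.txt     # read errors should be reported with a block number
tail listing.txt    # last cleanly listed entry bounds the damaged region
```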