Write HFS+ formatted drive image to smaller capacity drive on Linux


I imaged a potentially defective 1TB hard disk drive containing about 270GB of actual data, using ddrescue, on a Lubuntu live system. The recovery is 99.9% complete; only a 52KB area near the 300MB mark was unreadable – yet SMART shows no “pending” or “reallocated” sectors. First question: how is this possible? Could this be a benign case of “logical” bad sectors, i.e. sectors which are physically still operational but in an inconsistent state, resulting in a CRC check failure, and could they be “fixed” durably and reliably simply by overwriting them? I ran the short self-test, which “completed with read failure”. Can I still trust SMART data 100% and be confident that, if it reports no bad sectors, there are indeed none at the physical level?
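For reference, the counters and self-test in question can be driven with smartmontools – a minimal sketch, with /dev/sdb standing in for the drive:

sudo smartctl -A /dev/sdb            # attribute table, incl. Reallocated_Sector_Ct and Current_Pending_Sector
sudo smartctl -t short /dev/sdb      # start the short self-test
sudo smartctl -l selftest /dev/sdb   # read the self-test log once it completes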

Then, I have 3 spare drives which I could use to transfer the recovered data for the owner, who uses a MacBook: a 320GB USB 2.0 drive, a 500GB USB 3.0 drive, and a 1TB USB 3.0 drive. The source drive is formatted as HFS+. Is there a safe and convenient way to write that 1TB image, which actually occupies only about 270GB (it was created in sparse mode using ddrescue’s -S switch), directly to a smaller-capacity drive, with a free Linux or Windows tool, in such a way that the recovered drive is readily readable, with a consistent partition table? (I have no experience with Apple partitioning and formatting schemes.) Or would I be better off creating an HFS+ partition – with which tool, since apparently GParted can't handle that – and copying the files and folders? But in that case, would the timestamps and other metadata be preserved automatically, or would I have to use a specific method to make sure of it? Can a Linux command like “cp” copy files between HFS+ partitions and preserve all attributes specific to that filesystem?
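For reference, the image can be attached and inspected without writing anything to a physical disk – a sketch, assuming the image contains a full partition table; the path and the partition number are illustrative:

sudo losetup --find --show --read-only -P /path/to/rescue.img   # prints e.g. /dev/loop0
sudo mount -t hfsplus -o ro /dev/loop0p2 /mnt/src               # mount the HFS+ volume read-only

This relies on the hfsplus kernel module, which stock Ubuntu kernels normally ship.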

Thanks.

GabrielB

Posted 2018-07-09T13:14:58.707

Reputation: 598

I presume you imaged the disk to a file, and stored a log file too? Did you run ddrescue again, providing the log/map file? It's possible that the unreadable area could be successfully read if you try again enough times. – Attie – 2018-07-09T13:25:49.513

Writing an image with a hole to a new disk is going to cause problems at some point down the line - I'd recommend against it... mount the filesystem using the image (not the disk), and copy the files to a new filesystem on a new disk using rsync or similar. Timestamps should be handled, but other Apple-specific metadata may be left out. – Attie – 2018-07-09T13:29:34.717
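A hedged sketch of that approach, assuming the image has been attached read-only as a loop device and /dev/sdc1 is a freshly created destination filesystem (both names illustrative):

sudo mount -t hfsplus -o ro /dev/loop0p2 /mnt/src
sudo mount /dev/sdc1 /mnt/dst
sudo rsync -avh --progress /mnt/src/ /mnt/dst/

rsync's -a preserves timestamps, ownership and permissions; resource forks and other Apple-specific metadata may not survive the trip through Linux, as noted above.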

Yes, I used a log file, but no, this area was always treated as “error” (I retried extracting the first GB a few times) – and it was skipped right away, with no delay or slowdown, unlike what usually happens when a “physical” bad sector is encountered – which would seem to confirm the “logical” bad sector hypothesis. – GabrielB – 2018-07-12T11:10:33.993

Well, it's not really a “hole”, it's just a small area left blank, right? I found out, using ddru_findbad from ddrutility, that the 104 unreadable sectors were located in the “/.journal” file. If it's like $LogFile / $UsnJrnl in NTFS, it shouldn't jeopardize the partition's integrity, right? Otherwise: GParted can't create an HFS+ partition. Is there a free tool that can? – GabrielB – 2018-07-12T14:11:37.247
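(For what it's worth, a sketch of one possible answer to that last question: on Debian/Ubuntu the hfsprogs package provides mkfs.hfsplus, which can format an existing partition as HFS+ – the device name below is illustrative:

sudo apt-get install hfsprogs
sudo mkfs.hfsplus -v "Recovered" /dev/sdc1

The partition itself would still have to be created first, e.g. with GParted as “unformatted”.)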

Answers


So I did what my intuition told me: I attempted to overwrite just the tiny unreadable area with this ddrescue command (it could be done with the more basic dd tool, but I'm less familiar with it):

lubuntu@lubuntu:~$ sudo ddrescue -o 312881152 -s 53248 -f /dev/zero /dev/sdb /media/lubuntu/354E48E260FCFD84/dev_zero_dev_sdb.log

[Note: the -f (--force) switch is necessary here, since by default ddrescue refuses to write directly to a physical device.]
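For reference, a dd equivalent would be a sketch along these lines – the same offsets, expressed in 512-byte sectors (312881152 / 512 = 611096; 53248 / 512 = 104):

sudo dd if=/dev/zero of=/dev/sdb bs=512 seek=611096 count=104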

And it worked: as a verification, I re-imaged the first GB, and this time there was no error; the “short self-test” now completes with no error as well. (I had tried this partial imaging before running the above command, and the error area was still there then, with the exact same location and size. I also noticed that it was skipped right away, with no slowdown, contrary to what usually happens when there's an actual “physical” bad sector and the drive slows down or hangs for a few seconds before skipping.)
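For reference, such a partial re-image can be done along these lines (a sketch; paths illustrative, -s limits the copy to the first GiB):

sudo ddrescue -s 1073741824 /dev/sdb /media/lubuntu/354E48E260FCFD84/verify_1g.img /media/lubuntu/354E48E260FCFD84/verify_1g.log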

Before that, I had tried some Windows tools: a read scan with Hard Disk Sentinel froze indefinitely, and I had to shut the drive down; likewise, trying to access the problematic area with WinHex made it freeze until the drive was shut down.

So, am I correct that this was a case of “logical” bad sectors, and that the drive is physically fine and safe to use again, given that no “pending” or “reallocated” sector was reported in S.M.A.R.T. at any point in the process? What is the likely cause of this – perhaps a write operation interrupted by an improper shutdown? Is this a common issue, and does it commonly render the drive inoperative when it affects a system file?

GabrielB

Posted 2018-07-09T13:14:58.707

Reputation: 598