Mounting HFS+ partition on Arch Linux

I'm having some problems with mounting an hfs+ partition on Arch Linux.

When I run sudo mount -t hfsplus /dev/sda2 /mnt/mac I get this error:

mount: wrong fs type, bad option, bad superblock on /dev/sda2,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

Running dmesg | tail gives:

[ 6645.183965] cfg80211: Calling CRDA to update world regulatory domain
[ 6648.331525] cfg80211: Calling CRDA to update world regulatory domain
[ 6651.479107] cfg80211: Calling CRDA to update world regulatory domain
[ 6654.626663] cfg80211: Calling CRDA to update world regulatory domain
[ 6657.774207] cfg80211: Calling CRDA to update world regulatory domain
[ 6660.889864] cfg80211: Calling CRDA to update world regulatory domain
[ 6664.007521] cfg80211: Exceeded CRDA call max attempts. Not calling CRDA
[ 6857.870580] perf interrupt took too long (2503 > 2495), lowering kernel.perf_event_max_sample_rate to 50100
[11199.621246] hfsplus: invalid secondary volume header
[11199.621251] hfsplus: unable to find HFS+ superblock

Is there a way to mount this partition?

EDIT:

Using sudo mount -t hfsplus -o ro,loop,offset=409640,sizelimit=879631488 /dev/sda2 /mnt/mac gets rid of the hfsplus: invalid secondary volume header error in dmesg | tail.

ZuluDeltaNiner

Posted 2015-08-23T05:40:28.017

Reputation: 363

Answers

It's likely that the HFS+ volume is not mounting because the HFS+ partition is wrapped in a CoreStorage volume (the default since OS X 10.10). You can verify whether this is the case from the output of fdisk -l: fdisk output
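As a sketch of what to look for (the device path and numbers below are hypothetical placeholders, not taken from the question), a CoreStorage wrapper shows up in the partition type column of fdisk -l:

```shell
# Simulated fdisk -l output line (hypothetical values); on real hardware run:
#   sudo fdisk -l /dev/sdX
fdisk_line='/dev/sda2  409640  623872871  623463232 297.3G Apple Core storage'

# "Apple Core storage" rather than "Apple HFS/HFS+" in the type column
# means the HFS+ volume is wrapped in CoreStorage.
if echo "$fdisk_line" | grep -qi 'core storage'; then
    echo "CoreStorage wrapper detected"
fi
```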

HFS+ uses two volume headers: the primary 1024 bytes into the device and the secondary 1024 bytes from the end of the device. Per the spec, when mounting a partition the secondary header is expected to be exactly 1024 bytes from the partition's end, but with CoreStorage wrapping the HFS volume that's no longer the case, so the driver aborts. You can pass -o sizelimit=N to mount to manually specify the HFS volume size and fix this, but how does one get the magic value for N?
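As background, you can inspect the primary volume header directly: a valid HFS+ volume carries the two-byte signature "H+" (or "HX" for HFSX) at byte offset 1024. This sketch builds a dummy image to demonstrate; on real hardware you would point dd at the partition (e.g. /dev/sda2) instead:

```shell
# Build a dummy image with an "H+" signature at offset 1024
# (on real hardware: sudo dd if=/dev/sdXn bs=1 skip=1024 count=2).
img=$(mktemp)
head -c 1024 /dev/zero > "$img"
printf 'H+' >> "$img"

sig=$(dd if="$img" bs=1 skip=1024 count=2 2>/dev/null)
echo "signature: $sig"   # "H+" indicates a valid primary HFS+ header
rm -f "$img"
```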

The testdisk utility can scan for partitions, hinting at where the HFS partition really ends. Be wary - selecting the wrong options in testdisk can damage your partition table!

  1. Launch TestDisk with testdisk /dev/sdX, then select OK to choose the drive
  2. Select Intel for MBR-formatted or EFI GPT for GPT-formatted drives
  3. Select Analyse and then Quick Search
  4. After a few moments it should print the partitions found: testdisk results

    The partition indicated looks awfully close to (but slightly smaller than) the real partition size of 623463232 sectors reported by fdisk -l earlier.

    Because the TestDisk output uses sectors, we'll need to multiply the sector count by the drive's logical sector size (typically 512 or 4096 bytes) to get the HFS volume size in bytes. That's the value for N we'll use for -o sizelimit=N when mounting the HFS volume.

    If you don't know your drive's logical sector size, check the first of the two numbers reported by fdisk -l on the line shown below: finding your disk's logical sector size

  5. Press q several times to exit the program

  6. Mount the disk: mount /dev/sdXn -t hfsplus -o ro,sizelimit=N
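Putting the arithmetic together as a sketch (all numbers below are hypothetical placeholders; substitute the sector count TestDisk reported and your drive's logical sector size):

```shell
# Hypothetical values -- replace with your own:
SECTORS=623462000     # HFS volume size in sectors, from testdisk
SECTOR_SIZE=512       # logical sector size, from fdisk -l

N=$((SECTORS * SECTOR_SIZE))   # sizelimit in bytes
echo "sizelimit=$N"

# Then mount read-only (requires root; /dev/sdXn is the HFS+ partition):
#   sudo mount -t hfsplus -o ro,sizelimit=$N /dev/sdXn /mnt/mac
```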

Stewart Adam

Reputation: 553

From user edmonde: This recipe worked great for me, but I had to tweak it using the logical sector size (the first of two numbers, in my case 512 versus 4096) as opposed to the physical sector size to calculate the total volume size. I'm not sure why but it worked great.

– fixer1234 – 2016-07-06T18:51:44.007

This fixed my problem. Other resources suggested using an offset parameter, which didn't work when combined with this, but using only sizelimit set to the number of bytes (bytes * sectors) worked like a charm, even for non-CoreStorage partitions – cdeszaq – 2016-10-16T16:58:35.970

This doesn't work for me. I get mount failed: Unknown error -1 and nothing in dmesg. hfsplus is definitely loaded. – Dan – 2017-05-09T16:19:18.580

+1 fixed by using logical sector size – Jake – 2017-10-08T05:42:30.047

This solution was working fine for me till after an update on OSX which stopped this working. Anyone else had this issue? Any advice? – Vik – 2019-04-15T00:05:48.750

Same problem as @Vik; not working here with partitions created under mac osx Mojave. The analysis performed by testdisk does not seem to make sense. – Freddo – 2019-11-18T21:30:47.640

Another option is to get rid of CoreStorage entirely, if an OS X machine is available to you. This also removes encryption if you're using it, in which case you'll have to wait until decryption is finished (with the machine plugged into power and booted into OS X, even recovery).

You would need to boot to a disk other than the one in question, preferably Internet Recovery (if available: Command-Option-R on reboot). Open up the Terminal and run:

diskutil cs list

The output should list your CoreStorage volumes along with their properties; one of these is the Revertible status. If it indicates Yes then you'll be in good shape to proceed. Next you would run:

diskutil cs revert /dev/diskXsY

(Where X is the disk number and Y is the partition number).

You can check its status afterwards with the same "diskutil cs list" command. If it wasn't encrypted, it should already be back to a standard GPT partition layout and you can try to mount it again in Arch. It will still be journaled, which keeps it read-only on Linux; if you want to toggle journaling you can do so in Disk Utility.

If it was encrypted the process will take a while but "diskutil cs list" will show you the progress as a percentage.

I've had no issues mounting non-CoreStorage HFS+ drives and partitions on Arch myself, though I did eventually move the data off, repartition as ext4, and move the data back.

Cory T

Reputation: 111