My Promise NAS NS4300N recently died (PSU or motherboard failure, probably the former as it has trouble spinning up the disks).
I've managed to dd(1) the drives (four 500 GB drives in a RAID5 configuration) to image files on a new server, even though one of the drives had a couple of read errors (conv=noerror ftw...).
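For reference, the imaging step looked something like this (the source device name and block size are illustrative; conv=noerror keeps dd going past read errors, and adding sync pads the unreadable blocks with zeros so later offsets in the image stay aligned with the disk):
$ sudo dd if=/dev/sdb of=/local/media/promise.dd.1 bs=64K conv=noerror,sync status=progress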
However, as the Promise NAS doesn't use Linux md (mdadm(8)) for RAID but instead its own "hardware" RAID (aka FakeRAID), the resulting images look like this:
$ fdisk -l /local/media/promise.dd.1
Disk /local/media/promise.dd.1: 465.8 GiB, 500106174464 bytes, 976769872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb95a0900
Device Boot Start End Sectors Size Id Type
/local/media/promise.dd.1p1 63 2929918634 2929918572 1.4T 83 Linux
$ fdisk -l /local/media/promise.dd.2
Disk /local/media/promise.dd.2: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
$ fdisk -l /local/media/promise.dd.3
Disk /local/media/promise.dd.3: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
$ fdisk -l /local/media/promise.dd.4
Disk /local/media/promise.dd.4: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb95a0900
Device Boot Start End Sectors Size Id Type
/local/media/promise.dd.4p1 63 2929918634 2929918572 1.4T 83 Linux
When attached as loop(4) devices, the images look like this:
$ sudo lsblk -io NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
..
loop0 promise_fasttrack_raid_member 465.8G
loop1 promise_fasttrack_raid_member 465.8G
loop2 promise_fasttrack_raid_member 465.8G
loop3 promise_fasttrack_raid_member 465.8G
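The loop devices above were created along these lines (the exact invocation is a guess; --read-only keeps the images untouched, and --show prints the allocated device):
$ sudo losetup --read-only --find --show /local/media/promise.dd.1
/dev/loop0
(and likewise for the other three images)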
Unsurprisingly, mdadm(8) is unable to read these, as it cannot find a usable superblock:
$ sudo mdadm --verbose --examine /dev/loop0
/dev/loop0:
MBR Magic : aa55
Partition[0] : 2929918572 sectors at 63 (type 83)
$ sudo mdadm --verbose --examine /dev/loop1
mdadm: No md superblock detected on /dev/loop1.
And of course:
$ sudo mdadm --verbose -A /dev/md127 --readonly --run /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mdadm: looking for devices for /dev/md127
mdadm: no recogniseable superblock on /dev/loop1
mdadm: /dev/loop1 has no superblock - assembly aborted
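The Promise signature itself is clearly being detected (that is where the promise_fasttrack_raid_member FSTYPE in the lsblk output above comes from). Running wipefs(8) without any erase options merely lists each signature and its offset, so it is a safe way to see where the Promise metadata sits on each image:
$ sudo wipefs /dev/loop0
(output omitted; it reports the promise_fasttrack_raid_member signature and its offset)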
I thought I could try to read/examine these using dmraid(8), as it is advertised as a tool to "discover, configure and activate software (ATA)RAID". But as far as I can tell, that only holds if the drives are exposed through the BIOS, which these clearly are not, since they are loop(4) devices:
$ sudo dmraid -ay
no raid disks
Do I have any chance of recovering the data via software? Or is my only option to find hardware that can read the physical drives (e.g. a Promise PCI card)?
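One idea for a software-only attempt, sketched here with guessed parameters and meant to be run only against copies of the images, would be to re-create the array with mdadm --create --assume-clean using the geometry the Promise firmware presumably used. The --metadata=1.0 format keeps the new md superblock at the end of each member, so the data offset stays at 0; since that end region is also roughly where the Promise metadata lives, this must never be done on the originals. The chunk size, layout and device order below are pure guesses that would need trial and error:
$ sudo mdadm --create /dev/md127 --assume-clean --metadata=1.0 --level=5 --raid-devices=4 --chunk=64 --layout=left-asymmetric /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
(If the guesses are right, the 1.4T partition from the fdisk output should reappear inside /dev/md127, and a read-only mount of it would show whether the data is intact.)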
Thanks for reading.
What RAID level were they using? – davidgo – 2017-05-17T20:23:43.487
The Promise NAS drives were in a 4 disk RAID5 configuration. – thoughtbox – 2017-05-18T13:44:15.817
In the meantime, I'd like to add that to get around this particular situation, I first tried to read the disks with a Promise FastTrack 4300 PCI card. This didn't work, probably because (as I discovered afterwards) the card does not support RAID5. So what I did in the end was to look at the NAS PSU header. It looked very much like an ATX header, and it was: powering the board from the ATX supply of a normal desktop computer worked, and I am recovering the data now. Not really the solution I was looking for, but I lucked out. – thoughtbox – 2017-06-04T13:44:15.243
If you're still interested, the commands Kamil Maciorowski posted in the comments on one of my questions worked for what seems to be a similar situation; give those a try. – awksp – 2018-07-16T20:36:30.737
@awksp I'm afraid RAID5 makes it more difficult than our case… – Kamil Maciorowski – 2018-07-17T01:19:31.260