I am trying to remount a Synology RAID 0 array of 3x2TB drives. As far as I know, the drives are in healthy condition.
I simply don't have enough spare space to take full images of these drives before I proceed, so I hope more experienced users can assist me. The data is certainly not the most important stuff I have, but naturally it would be great if I could recover it.
Sorry for such a massive post, but I figured it was best to give you as much info as possible.
Here is what I have tried so far:
#sudo fdisk -l
(Just listing my raid drives)
Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00098445

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sda1           256    4980735    4980480  2,4G fd Linux raid autodetect
/dev/sda2       4980736    9175039    4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 3906824351 3897387168  1,8T fd Linux raid autodetect

Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0004a6ce

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sdb1           256    4980735    4980480  2,4G fd Linux raid autodetect
/dev/sdb2       4980736    9175039    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 3906824351 3897387168  1,8T fd Linux raid autodetect

Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x000d3273

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sdd1          2048    4982527    4980480  2,4G fd Linux raid autodetect
/dev/sdd2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdd3       9437184 3906824351 3897387168  1,8T fd Linux raid autodetect
Then the output of mdstat:
#sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb3[2](S) sda3[0](S) sdd3[1](S)
      5846077648 blocks super 1.2

unused devices: <none>
Then the result of examining partition #3 on each drive:
#sudo mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a55ec236:a21e68d6:880073e6:0767672b
           Name : Rackstation:3
  Creation Time : Sun Oct 29 20:55:08 2017
     Raid Level : raid0
   Raid Devices : 3

 Avail Dev Size : 3897385088 (1858.42 GiB 1995.46 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 97599be0:421d7434:27cf35e3:3738cb20

    Update Time : Wed Dec  6 10:30:37 2017
       Checksum : 971d4c1e - correct
         Events : 2

     Chunk Size : 64K

    Device Role : Active device 0
    Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
#sudo mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a55ec236:a21e68d6:880073e6:0767672b
           Name : Rackstation:3
  Creation Time : Sun Oct 29 20:55:08 2017
     Raid Level : raid0
   Raid Devices : 3

 Avail Dev Size : 3897385088 (1858.42 GiB 1995.46 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 3d7f416b:da7acb4c:db31ee99:2d7c160d

    Update Time : Wed Dec  6 10:30:37 2017
       Checksum : dd1e7607 - correct
         Events : 2

     Chunk Size : 64K

    Device Role : Active device 2
    Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
#sudo mdadm --examine /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a55ec236:a21e68d6:880073e6:0767672b
           Name : Rackstation:3
  Creation Time : Sun Oct 29 20:55:08 2017
     Raid Level : raid0
   Raid Devices : 3

 Avail Dev Size : 3897385120 (1858.42 GiB 1995.46 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a5b79e82:0c39533c:4f4adae0:540dafdf

    Update Time : Sun Oct 29 20:55:08 2017
       Checksum : fe189076 - correct
         Events : 0

     Chunk Size : 64K

    Device Role : Active device 1
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
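To sanity-check the numbers above, I put together a quick script (values copied by hand from the --examine and /proc/mdstat output). It confirms that the inactive array's block count matches the sum of the three members, and highlights what worries me: /dev/sdd3 has an older event count than the other two, and its Update Time still equals the Creation Time.

```python
# Sanity check on the mdadm superblock numbers, values copied by hand
# from the --examine and /proc/mdstat output above.

# Avail Dev Size per member, in 512-byte sectors
avail_sectors = {
    "sda3": 3897385088,
    "sdb3": 3897385088,
    "sdd3": 3897385120,
}

# /proc/mdstat reports the inactive array size in 1 KiB blocks
mdstat_blocks = 5846077648

total_kib = sum(avail_sectors.values()) // 2  # 2 sectors = 1 KiB
print(total_kib == mdstat_blocks)  # True: all three members are accounted for

# Event counters from --examine: sdd3 never saw the later superblock updates
events = {"sda3": 2, "sdb3": 2, "sdd3": 0}
stale = [dev for dev, ev in events.items() if ev < max(events.values())]
print(stale)  # ['sdd3']
```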
mdadm reports that each member is clean, which is good, I guess?
I then followed the advice here: re-mount-two-old-disk-from-raid0-setup-to-recover-data
#sudo mount /dev/md127 /mnt/oldData
mount: /mnt/oldData: can't read superblock on /dev/md127.
I then googled some more and ran:
#sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

           Name : Rackstation:3
           UUID : a55ec236:a21e68d6:880073e6:0767672b
         Events : 0

    Number   Major   Minor   RaidDevice

       -       8       51        -        /dev/sdd3
       -       8       19        -        /dev/sdb3
       -       8        3        -        /dev/sda3
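If I end up attempting a forced reassembly (which seems to be the usual suggestion for this situation, though I have not dared run it yet), I believe the members should be listed in the order given by the "Device Role" fields above: sda3 = 0, sdd3 = 1, sdb3 = 2. A throwaway snippet just to double-check that ordering (mdadm also reads the roles from the superblocks, so this is purely a paranoia check on my part):

```python
# Build the member list in "Device Role" order, from the --examine output above
roles = {"/dev/sda3": 0, "/dev/sdd3": 1, "/dev/sdb3": 2}
ordered = [dev for dev, _ in sorted(roles.items(), key=lambda kv: kv[1])]
cmd = "mdadm --assemble --force /dev/md127 " + " ".join(ordered)
print(cmd)  # mdadm --assemble --force /dev/md127 /dev/sda3 /dev/sdd3 /dev/sdb3
```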
Can anyone read anything from this? Help is greatly appreciated.