Several days ago I found my DS412+ in a fatal state: Volume1 had crashed, and so had the system volume. Moreover, Volume2 had disappeared from the system! My theory is that Volume1 ran out of free space, could not relocate data from a couple of bad blocks to a new place, and that corrupted the system data (it's just a theory).
I managed to bring Volume1 back to life using the procedures described here (e2fsck, mdadm reassemble). By the way, I have to mention the new syno_poweroff_task command, which simplifies the process!
Then I restored the system volume using the Synology GUI. Everything started working OK, except that I cannot restore Volume2. It was a RAID1 array consisting of two disks of the same size. This is an excerpt from /etc/space_history*.xml dated right before the crash:
<space path="/dev/md3" reference="/volume2" >
    <device>
        <raid path="/dev/md3" uuid="927afd83:*" level="raid1" version="1.2">
            <disks>
                <disk status="normal" dev_path="/dev/sdc3" model="WD30EFRX-68AX9N0 " serial="WD-*" partition_version="7" slot="1">
                </disk>
                <disk status="normal" dev_path="/dev/sdd3" model="WD30EFRX-68AX9N0 " serial="WD-*" partition_version="7" slot="0">
                </disk>
            </disks>
        </raid>
    </device>
    <reference>
        <volume path="/volume2" dev_path="/dev/md3">
        </volume>
    </reference>
</space>
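Since that excerpt records the array UUID DSM last knew for /dev/md3, it can be cross-checked against what mdadm reads from a member's superblock; a mismatch would suggest the superblock has since been rewritten. A small sketch, with the XML line inlined for illustration (on the NAS you would grep the real /etc/space_history*.xml instead):

```shell
# Pull the array UUID that DSM recorded for /dev/md3 from the excerpt.
uuid_from_xml=$(grep -o 'uuid="[^"]*"' <<'EOF' | cut -d'"' -f2
<raid path="/dev/md3" uuid="927afd83:*" level="raid1" version="1.2">
EOF
)
echo "$uuid_from_xml"    # -> 927afd83:*
# Compare on the DiskStation with:
#   mdadm --examine /dev/sdc3 | grep 'Array UUID'
```

Note that the Array UUID in the --examine output below starts with 600cff1e, not 927afd83; if that difference is real and not just the masking, it would itself be a clue that the superblock no longer belongs to the original array.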
The RAID members (/dev/sdc3 and /dev/sdd3) are still in place, and they look OK, at least /dev/sdc3 does:
DiskStation> mdadm --misc --examine /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 600cff1e:0e27a96d:883007c3:610e73ef
Name : DiskStation:3 (local to host DiskStation)
Creation Time : Thu Mar 19 22:21:08 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 5851088833 (2790.02 GiB 2995.76 GB)
Array Size : 5851088512 (2790.02 GiB 2995.76 GB)
Used Dev Size : 5851088512 (2790.02 GiB 2995.76 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : f0b910a0:1de7081f:dd65ec22:a2a16d58
Update Time : Thu Mar 19 22:21:08 2015
Checksum : a09b6690 - correct
Events : 0
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
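Two details in that --examine output can be checked mechanically: the event counter is 0 and the Update Time equals the Creation Time, which is also what a freshly (re)written superblock would show. A sketch, with the relevant lines inlined (on the NAS, pipe `mdadm --examine /dev/sdc3` in instead):

```shell
# Extract the event counter; a long-lived array normally has a large
# value here, so 0 is itself suspicious.
events=$(awk -F' : ' '/Events/ {print $2}' <<'EOF'
Update Time : Thu Mar 19 22:21:08 2015
Events : 0
EOF
)
echo "$events"   # -> 0
# Least destructive next step on the DiskStation: a degraded, read-only
# assembly from the one member whose superblock looks sane:
#   mdadm --assemble --run --readonly /dev/md3 /dev/sdc3
#   cat /proc/mdstat
#   mount -o ro /dev/md3 /volume2
```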
I've tried a lot of tricks with mdadm, in many forms like these:
mdadm -v --assemble /dev/md3 /dev/sdc3 /dev/sdd3
mdadm --verbose --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3 --force
mdadm --verbose --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc3 missing
All of them result in something like this:
mdadm: ADD_NEW_DISK for /dev/sdc3 failed: Invalid argument
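One caution on those attempts: mdadm --create writes new superblocks even with --force, so each retry can overwrite evidence of the original array. Before going further it may be worth imaging the existing v1.2 superblocks; per the "Super Offset : 8 sectors" line above they start 4096 bytes into each member. A sketch (the /root/*.bin output paths are illustrative):

```shell
# v1.2 superblock location: 8 sectors * 512 bytes/sector into the member.
super_offset_bytes=$((8 * 512))
echo "$super_offset_bytes"    # -> 4096
# On the DiskStation (read-only copies of both members' superblocks):
#   dd if=/dev/sdc3 of=/root/sdc3-super.bin bs=512 skip=8 count=8
#   dd if=/dev/sdd3 of=/root/sdd3-super.bin bs=512 skip=8 count=8
#   mdadm --examine /dev/sdd3   # does the second member still carry metadata?
```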
Is there any chance to restore the RAID volume? Or at least to recover the data from it, for example by mounting the /dev/sdc3 member directly?
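On that last question: in principle, yes. A RAID1 member holds a plain copy of the filesystem, just shifted by the data offset. With metadata 1.2 and the "Data Offset : 2048 sectors" shown above, the filesystem starts 1 MiB into /dev/sdc3, so a read-only loop device at that offset should expose it. A sketch (the mount point /mnt/recover is illustrative):

```shell
# Data offset in bytes: 2048 sectors * 512 bytes/sector.
data_offset_bytes=$((2048 * 512))
echo "$data_offset_bytes"     # -> 1048576
# On the DiskStation, read-only so the member is never written:
#   losetup --read-only -o 1048576 /dev/loop0 /dev/sdc3
#   mount -o ro /dev/loop0 /mnt/recover
```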
More mdadm info:
DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb3[0]
2925544256 blocks super 1.2 [1/1] [U]
md1 : active raid1 sdb2[0] sdc2[1]
2097088 blocks [4/2] [UU__]
md0 : active raid1 sdb1[2] sdc1[0]
2490176 blocks [4/2] [U_U_]
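Reading the mdstat output above: the [n/m] counters mean n configured slots and m active members, so md1's [4/2] is a degraded system array, and md3 is missing from the list entirely, i.e. it is not assembled at all. A small sketch of pulling the counters out of a status line (inlined for illustration):

```shell
# Parse the [configured/active] counters from an mdstat status line.
line='2097088 blocks [4/2] [UU__]'
counters=$(printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]')
echo "$counters"    # -> [4/2]
```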