
Our server recently suffered a disk failure, so our hosting provider installed a new disk with CentOS on it so we can log in and see if we can recover the data.

We had 2 x 120 GB SSD drives in a software RAID0 configuration. Our host set this up for us; we didn't do it ourselves as we lack the know-how, which is why I am posting this here.

Here's the output of fdisk -l - http://i.imgur.com/2RtluAU.png

sdb and sdc are the RAID drives.

I tried the following commands to mount the drives: http://i.imgur.com/J7eM7i5.png

I did some digging around and found that for software RAID you can use the mdadm tool to scan for and assemble arrays automatically, but this didn't work either:

[root@localhost ~]# mdadm -A --scan
mdadm: No arrays found in config file or automatically
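
From what I've read, mdadm can also be pointed at the member devices explicitly instead of relying on the scan. This is roughly what I was going to try next, though the partition names below are just placeholders since I don't actually know which ones were the RAID members:

# placeholder device names - I don't know which partitions were the RAID members
mdadm --assemble /dev/md0 /dev/sdbX /dev/sdcY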

I tried running the --examine option and this is the output:

[root@localhost ~]# mdadm --examine /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :       204800 sectors at         2048 (type 83)
Partition[1] :     20480000 sectors at       206848 (type 83)
Partition[2] :      8192000 sectors at     20686848 (type 82)
Partition[3] :    435964672 sectors at     28878848 (type 05)
[root@localhost ~]# mdadm --examine /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
[root@localhost ~]# mdadm --examine /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.
[root@localhost ~]# mdadm --examine /dev/sdb3
mdadm: No md superblock detected on /dev/sdb3.
[root@localhost ~]# mdadm --examine /dev/sdb4
mdadm: No md superblock detected on /dev/sdb4.

[root@localhost ~]# mdadm --examine /dev/sdc
mdadm: No md superblock detected on /dev/sdc.
[root@localhost ~]# mdadm --examine /dev/sdc1
mdadm: cannot open /dev/sdc1: No such file or directory

And here's the output of cat /proc/mdstat:

[root@localhost ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
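
I also found suggestions to check what is actually sitting on those partitions before touching anything else, with commands like the ones below, but I haven't run them yet:

# just to see what signatures/filesystems are actually on the partitions
blkid /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdc
file -s /dev/sdb2 /dev/sdb3 /dev/sdc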

I had a similar issue in the past and already asked about it here: re-mount two old disk from raid0 setup to recover data

Last time I managed to fix it because the array was 100% clean, and I could mount it by running: mkdir /mnt/oldData && mount /dev/md127 /mnt/oldData. This time the issue appears to be different: there is no /dev/md* device at all - see this -> i.imgur.com/EMxrwOx.png

Can someone help?

Latheesan
    You have had two RAID0 failures; perhaps it is time to use a RAID level that doesn't almost guarantee you lose your data. – Zoredache Oct 10 '13 at 16:17

1 Answer


Last time you had the old disks to get the data from. If you do not have the old disk that failed, you are missing half of your data and will not be able to recover it. You can try sending both disks offsite to a data recovery company if the data is important enough. Even if you do have access to the old disks, if you can't mount them, you can't get to the data.

Have your host reconfigure your array to use something other than RAID0 and restore from backup (you do have backups, right?). Then find a new host that won't even touch RAID0 in a production system.
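
If the host hands you the raw disks again, a mirror is a straightforward alternative; roughly something like this, with the disk names assumed and to be adjusted to your layout:

# rough sketch, disk names assumed - build a RAID1 mirror instead of RAID0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mkdir -p /mnt/data && mount /dev/md0 /mnt/data
# record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm.conf

With RAID1 you lose half the raw capacity, but a single-disk failure no longer takes the data with it.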

Rex