Today I had problems accessing the SMB share on my LinkStation. My NAS has two hard disks, configured as raid0. This raid was mounted at /mnt/array1/intern - the folder that I am now missing.

My first problem is that I don't really know where to look for error reporting.

Let's start with /var/log/messages, which says:

/usr/local/bin/hdd_check_normal.sh: mount -t xfs /dev/md2 /mnt/array1 failed.

OK. I googled this message and tried the following:

cat /proc/mdstat
md2 : inactive sda6[1](S)
      1938311476 blocks super 1.2

md1 : active raid1 sda2[1]
      4999156 blocks super 1.2 [2/1] [_U]

md10 : active raid1 sda5[1]
      1000436 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sda1[1]
      1000384 blocks [2/1] [_U]

unused devices: <none>

OK... from df -h I know that md0 is my boot partition and md1 is the root partition. I guess md2 is my missing raid - but what is md10 for? Anyway, I tried to refresh mdadm's configuration and reassemble the raids with:

mdadm --examine --scan > /etc/mdadm.conf
mdadm --assemble --scan -v

This leads to a couple of error messages, like:

cannot open device /dev/md/1: Device or resource busy
mdadm: /dev/sda2 has wrong uuid.
mdadm: no RAID superblock on /dev/mtdblock0

for sda, sda1, sda2, md/1, md/2 and so on. It's around 50 lines, so I don't want to post them all. What I don't understand is the "wrong uuid" part - didn't I just add the current UUIDs to mdadm.conf?
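Two things seem worth double-checking here (just a sketch of what I mean; I have not run these yet): whether the UUID in the on-disk superblock actually matches what the regenerated mdadm.conf contains, and - in hindsight - whether blindly overwriting the config was a good idea, since that throws away whatever the firmware put there:

# UUID as recorded in the member's superblock
mdadm --examine /dev/sda6 | grep 'Array UUID'

# UUID lines that mdadm.conf currently expects
grep -i uuid /etc/mdadm.conf

# next time: back up the vendor config before regenerating it
cp /etc/mdadm.conf /etc/mdadm.conf.bak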

Back in /var/log/messages I found a script, which I tried to start manually, hoping to get some more error messages:

/etc/init.d/start_data_array.sh

It gives me a whole bunch of messages; the most important ones are, IMHO:

mount: mounting /dev/md2 on /mnt/array1 failed: Input/output error
umount: forced umount of /mnt/array1 failed!
umount: cannot umount /mnt/array1: Invalid argument
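The mount error itself is not very specific; if I understand correctly, the kernel log should contain the underlying xfs/md complaint right after such a failed mount, so this is probably worth a look (sketch):

# the kernel usually logs the specific reason right after a failed mount
dmesg | tail -n 20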

So the problem I have, as far as I can tell, is that something is wrong with my raid0 array named md2.

The main questions are: What is wrong? How do I activate /dev/md2? (mdadm --detail /dev/md2 says the device is not active.) Do I have to re-create the array manually? Will I lose my data?

The error that the device is not active seems kind of generic to me; when looking it up I find a lot of posts and advice that are not really related to my problem.
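For completeness, this is the kind of explicit assembly I would try next (a sketch with my device names; as far as I understand, a raid0 cannot start degraded, so this would at least tell me verbosely why it refuses):

# stop the half-assembled array first
mdadm --stop /dev/md2

# try to assemble it explicitly from the one member I can see
mdadm --assemble --verbose /dev/md2 /dev/sda6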

Any help is appreciated, thanks a lot!

// UPDATE

It's getting strange - for me, at least. This is what fdisk -l says for /dev/sda and /dev/sda6:

root@OoompaLoompa:~# fdisk -l /dev/sda

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1      243202  1953514583+ ee EFI GPT

Disk /dev/sda6: 1984.8 GB, 1984832000000 bytes
255 heads, 63 sectors/track, 241308 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda6 doesn't contain a valid partition table

/dev/sda6 has no partition table, because it's part of my array, I guess. /dev/sda has a partition table, but no superblock:

mdadm --examine /dev/sda
mdadm: No md superblock detected on /dev/sda
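From what I have read, this might even be expected: the md superblock lives on the member partition (/dev/sda6 here), not on the raw disk, and fdisk only shows the GPT protective entry ("ee EFI GPT") anyway. If parted is available on the box, it should list the real partitions (not tried yet):

# fdisk is MBR-only; parted understands GPT
parted /dev/sda print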

But it's one of the two 2 TB HDDs. I am really confused. This is the output of --examine for both of those devices:

/dev/sda1:
        mdadm: No md superblock detected on /dev/sda.
/dev/sda6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 41e67f06:3b93cda0:46ac3bd7:96702dae
           Name : UNINSPECT-EMC36:2
  Creation Time : Thu Oct 18 01:43:39 2012
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 3876622952 (1848.52 GiB 1984.83 GB)
  Used Dev Size : 0
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 1dd1c5c5:220d14bf:5b0b1fc5:dbcc3f9c

    Update Time : Thu Oct 18 01:43:39 2012
       Checksum : 5d34dcac - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)

I'm still kind of confused. Should /dev/sda be the boot partition? I guess the solution is to somehow re-create the superblock and then re-assemble /dev/md2.
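If I read the --examine output right, /dev/sda6 is "Active device 1" of a two-device raid0, so the missing "device 0" should live on a second disk. Something like this ought to show whether the kernel sees that disk at all (a sketch; /dev/sdb6 is just my guess, assuming the second disk is partitioned like the first):

# every block device the kernel currently knows about
cat /proc/partitions

# if a second disk shows up, check its raid partition for the matching Array UUID
mdadm --examine /dev/sdb6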

Still, any help is highly appreciated :)

n.r.
  • Updated Information: – n.r. May 24 '15 at 21:01
  • Bad news now: Buffalo support urged me to update the firmware, and now I am not able to re-activate the ssh daemon. I'll have to get a desktop PC with e.g. Linux to try to repair the drives, maybe with TestDisk – n.r. May 26 '15 at 18:05

1 Answer


You have two drives joined in a raid0 stripe. One drive, /dev/sda, with its raid partition /dev/sda6, looks fine. What happened to the second drive? I suspect the second drive is damaged. Does it show up if you run fdisk -l?
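If fdisk does not list a second disk, it is worth checking whether the kernel detected it at all, along these lines (assuming it would enumerate as /dev/sdb):

# detection or error messages for the second disk
dmesg | grep -i sdb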

S.Haran
  • That's the output of fdisk -l /dev/sda: Disk /dev/sda: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda 1 243202 1953514583+ ee EFI GPT – n.r. May 26 '15 at 18:03
  • What is the output of "fdisk -l"? It will show all drives. You need to find the second drive; it has half your data. It could possibly be named /dev/sdb. – S.Haran May 26 '15 at 21:54