We have a server PC with an "Intel Matrix Storage Manager" RAID (or maybe 'fake' RAID) controller. At first there was one RAID-1 array with two identical 1 TB Seagate disks. We installed Ubuntu Server on it and partitioned the disk with standard partitions. Then we ran out of space and bought another two identical Seagate disks (of a different model) for a second RAID-1 array. On the new disks we decided to use LVM.
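For reference, the LVM layout on the second array was set up with the standard tools, roughly as sketched below. The device name /dev/md126 and the size are assumptions (Intel fake RAID arrays are often assembled by mdadm under such a name); the volume group name vg and the logical volume backup_opt match the names that show up in the errors further down.

# sketch only: /dev/md126 and the size are assumptions
sudo pvcreate /dev/md126
sudo vgcreate vg /dev/md126
sudo lvcreate -n backup_opt -L 100G vg
sudo mkfs.ext3 /dev/vg/backup_opt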
Everything was fine until this week, when both disks in the first RAID-1 started failing read/write operations, and S.M.A.R.T. tests failed on both of them. We rebooted the server and it started fine, but we began moving sensitive data to the second (we hoped healthy) RAID-1. A couple of new logical volumes were created for the backups. At some point an input/output operation failed again and we had to reboot the server.

We then decided not to use the faulty drives any more and booted from an Ubuntu live CD instead. The lvm2 package was installed on the running live CD, but the standard pvscan and vgscan discovered only ONE logical volume instead of SIX. After googling around we found the LVM backup config file and ran vgcfgrestore, roughly as sketched below.
After the restore all logical volumes became visible. Unfortunately, all of them except the original one turned out to be un-mountable. mount says:
mount: you must specify the filesystem type
or
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg-backup_opt,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
dmesg | tail
[96233.605251] EXT3-fs (dm-3): error: can't find ext3 filesystem on dev dm-3.
[96233.664882] EXT4-fs (dm-3): VFS: Can't find ext4 filesystem
[96233.784763] EXT2-fs (dm-3): error: can't find an ext2 filesystem on dev dm-3
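To double-check what signature, if any, actually sits at the start of a volume, something like the following can be run from the live CD (the device name is taken from the mount error above; an ext2/3/4 superblock would normally live at offset 1024):

sudo blkid /dev/mapper/vg-backup_opt
sudo file -s /dev/mapper/vg-backup_opt
# dump the primary superblock location by hand and look for the ext magic
sudo dd if=/dev/mapper/vg-backup_opt bs=1024 skip=1 count=1 | hexdump -C | head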
Then we tried several things such as mke2fs -S, testdisk and sleuthkit. Nothing helped.
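For completeness, the superblock-related attempts were along these lines. mke2fs -n is a dry run that only prints what would be done, including where the backup superblocks would be (it has to be called with the same parameters as the original mkfs for the locations to be right); e2fsck -b then tries one of those backups. The device name is again an assumption:

# dry run: print the backup superblock locations without writing anything
sudo mke2fs -n /dev/mapper/vg-backup_opt
# try fsck with one of the reported backup superblocks, e.g. 32768
sudo e2fsck -b 32768 /dev/mapper/vg-backup_opt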
The most disappointing thing is that among the lost partitions only three were really new; the others were created about a month ago.
We can't imagine what can be done now. Please help.