My RAID1 array /dev/md1 is rebuilding after one of the disks has been replaced.
Problem: the source disk has unrecoverable read errors, and my only choice if I do not want to lose the whole data set (no backup, no excuse) is to patiently overwrite the faulty sectors with hdparm --write-sector 0123456789 --yes-i-know-what-i-am-doing /dev/sde
(my source disk) so that the rebuild can keep going.
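For each bad sector, the sequence I follow looks roughly like this (using sector 1697876848 as the example; the read is only there to confirm that the sector really is unreadable before I overwrite it with zeroes):

    # confirm the sector is actually unreadable (this should fail with an I/O error)
    hdparm --read-sector 1697876848 /dev/sde

    # overwrite it with zeroes so the drive can remap it and the rebuild can get past it
    hdparm --write-sector 1697876848 --yes-i-know-what-i-am-doing /dev/sde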
I know that some of my files are going to be corrupted because I'm writing zeroes in some of the sectors they are stored in.
Now I need to identify these files with debugfs
and treat them accordingly.
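Once I have translated a damaged sector into an ext4 block number, I expect the debugfs part to look roughly like this (here /dev/vg0/data stands in for my actual LV path, and the numbers are placeholders):

    # which inode owns filesystem block 12345678 ?
    debugfs -R "icheck 12345678" /dev/vg0/data

    # resolve that inode (say icheck reported inode 2345678) to one or more path names
    debugfs -R "ncheck 2345678" /dev/vg0/data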
My volume layout is as follows:
    Relevant possibly corrupted file is "here" --+   ... but what is its inode ?
                                                 |
                                                 v
    +-----------------------------------------------+
    |                Ext4 filesystem                |
    +-----------------------------------------------+
    |                    LVM LV                     |
    +------------------------+----------------------+
    |         LVM PV         |        LVM PV        |
    +------------------------+----------------------+
    |       /dev/md127       |       /dev/md1       |
    |                        |                      |
    |<- 1953524992 sectors ->|<-1953522848 sectors->|
    +-----------+------------+-----------+----------+
    | /dev/sdd  |  /dev/sdc  | /dev/sdb  | /dev/sde |
    +-----------+------------+-----------+----------+
                                                 ^
                                                 |
    Problematic sector 1697876848 on /dev/sde ---+
So far, I "blanked out" sectors 1697876848
, 1524606517
, 1524609475
, etc. on /dev/sde and restarted recovery each time to let it finish.
Considering the different offsets (RAID + LVM), how can I calculate the inodes and identify the affected files?
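My rough plan, which I would like to have confirmed, is to collect the various offsets with the commands below and then chain the translations by hand. The LV path /dev/vg0/data and the volume group name vg0 are placeholders for my real names, and the arithmetic at the end is only my guess at how the pieces fit together:

    # offset of the RAID data area on the member disk ("Data Offset", in 512-byte sectors)
    mdadm --examine /dev/sde

    # start of the LVM data area inside the PV, in 512-byte sectors
    pvs --units s -o pv_name,pe_start /dev/md1

    # physical extent size, and which LV extents sit on which PV extents
    vgdisplay vg0 | grep 'PE Size'
    pvdisplay --maps /dev/md1

    # ext4 block size (usually 4096 bytes, i.e. 8 sectors per block)
    tune2fs -l /dev/vg0/data | grep 'Block size'

    # then, roughly (all values in 512-byte sectors unless noted):
    #   sector_in_md1     = bad_sector_on_sde - raid_data_offset
    #   sector_in_pv_data = sector_in_md1 - pe_start
    #   sector_in_lv      = sector_in_pv_data
    #                       - (first PV extent of the segment) * extent_size
    #                       + (first LV extent of the segment) * extent_size
    #   fs_block          = sector_in_lv * 512 / block_size
    # ... and fs_block is what I would feed to the debugfs icheck command above.

Is this the right arithmetic, or am I missing an offset somewhere?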