
So, I had a disk failure and was moving LVs from the failing disk to new PVs. Some LVs were moved successfully, some were not. Afterwards I ended up in the following state:

- two locked LVs
- a volume group with a missing PV
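
For reference, the leftover pvmove LV and the locked volumes should be visible once hidden LVs are included in the listing (an illustrative command, not a capture from my system):

lvs -a -o lv_name,lv_attr,devices vg3   # -a also lists hidden LVs such as pvmove1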

When I try to remove the PV, I get:

vgreduce --removemissing --force vg3
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Removing partial LV var.
  Can't remove locked LV var

lvremove -fff vg3/var
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Can't remove locked LV var

pvmove --abort
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Cannot change VG vg3 while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group vg3

The commands are deadlocked: removing the missing PV requires removing the locked LV, removing the LV requires clearing the lock with pvmove --abort, and the abort refuses to run while a PV is missing. So I also tried vgcfgbackup, then vgcfgrestore after editing the locks out of the backup, but to no avail:

vgcfgrestore --force vg3
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Cannot restore Volume Group vg3 with 1 PVs marked as missing.
  Restore failed.
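
For the record, the lock-editing attempt looked roughly like this; if memory serves, locked LVs carry a LOCKED flag in the status lists of the metadata backup (a sketch, not an exact transcript):

vgcfgbackup -f /tmp/vg3.conf vg3
# edit /tmp/vg3.conf by hand: drop "LOCKED" from the status = [...] lines
vgcfgrestore -f /tmp/vg3.conf --force vg3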

So I went even further and inserted the disk back. It's failed, but it is detectable for a bit.

vgreduce --removemissing vg3
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  WARNING: Partial LV var needs to be repaired or removed.
  WARNING: Partial LV pvmove1 needs to be repaired or removed.
  There are still partial LVs in VG vg3.
  To remove them unconditionally use: vgreduce --removemissing --force.
  Proceeding to remove empty missing PVs.

lvremove -fff vg3/var
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  Can't remove locked LV var

pvmove --abort
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  Cannot change VG vg3 while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group vg3

And this is the moment at which I am out of ideas.

macronus
  • Try using TestDisk to recover your LVs. This tool can detect your LVM structure, so you can dump it via dd. – Maxiko Jul 06 '16 at 10:47
  • The issue is that I am unable to unlock the LVs even when they are visible. Also, TestDisk didn't see the two volumes that LVM is making a fuss about. – macronus Jul 07 '16 at 13:51

1 Answer

Similar to Can't remove volume group, I solved this problem by creating a temporary PV with the same UUID:

UUID="RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP"  # from question
dd if=/dev/zero of=/tmp/tmp.raw bs=1M count=100
losetup -f                  # prints the first free loop device; /dev/loop0 assumed below
losetup /dev/loop0 /tmp/tmp.raw
pvcreate --norestorefile -u "$UUID" /dev/loop0   # it has arisen!
killall lvmetad             # so it stops complaining about duplicate uuids
pvremove /dev/loop0         # a clean removal
losetup -d /dev/loop0       # detach only this device; -D would detach them all
pvscan --cache              # to restart lvmetad

Season with vgreduce etc. if needed.
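
If the locks still need clearing, that unwinding belongs while the stand-in PV is attached, i.e. between the pvcreate and pvremove steps above; roughly (a sketch of the expected order, not a verified transcript):

pvmove --abort                 # no PV is missing now, so the abort can proceed
lvremove vg3/var               # the lock disappears with the aborted pvmove
vgreduce --removemissing vg3   # then drop the dead PV from the VG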

fche
  • I believe that "losetup /dev/loop0 tmp.raw" should be "losetup /dev/loop0 /tmp/tmp.raw". Not that that isn't obvious, except perhaps to another noob like me. Also, this didn't work for me; I got "Couldn't find device with uuid xxx", which, of course, ain't surprising. I tried tune2fs to just assign the UUID, but got "Bad magic number in super-block while trying to open /dev/sda". I presume this is because the disk is raw. Guess I need to Google a bit more. – codenoob Aug 01 '21 at 00:45
  • Thanks, fixed the /tmp part. The UUID is the one copied from the original error message. It's not one that will work on anyone else's particular system. – fche Aug 02 '21 at 03:07