When you expand a volume group over more than one physical volume you have a JBOD array, so if any of the physical volumes is removed (manually or through damage/degradation) you have little hope of getting your data back. In practice you may be able to recover some data from the remaining working devices, but that is far from guaranteed. Treat a multi-drive volume group as you would a RAID0 volume: unless the individual physical volumes are themselves RAID arrays (RAID1 or better) you are in a similar position with regard to data safety, so make sure everything non-volatile is part of a tested backup plan.
With regard to cleanly removing a physical volume from a volume group, that is possible (and not difficult).
- Reduce the size of your filesystems (with `resize2fs` for ext2/3/4 volumes) and reduce the logical volumes they exist on (with `lvresize`), or just remove excess logical volumes, so that the volume group has free space at least as large as the device you wish to remove (use `vgs` or similar to check this).
- Once that is done, use `pvmove` to move any data in extents on the outgoing device onto other devices in the volume group (for instance `pvmove -v /dev/hdc1`; note that this may take some time).
- When that is complete you can remove the physical volume from the group (with `vgreduce`), then remove the LVM metadata from it (with `pvremove`) to avoid potential confusion later.
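The three steps above can be sketched as a shell script. The volume group `vg0`, the physical volume `/dev/hdc1`, the logical volume `/dev/vg0/data` and the sizes are all hypothetical examples; by default the script only prints the commands it would run (set `RUN=1` and run as root to execute for real, after taking a backup).

```shell
#!/bin/sh
# Sketch of the PV-removal sequence. All names and sizes are examples.
# By default this only prints what it would do; RUN=1 executes for real.
run() {
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

VG=vg0                  # hypothetical volume group
PV=/dev/hdc1            # physical volume to remove
LV=/dev/vg0/data        # logical volume to shrink

# Step 1: shrink the filesystem first, then the LV, leaving a margin.
run e2fsck -f "$LV"                 # required before shrinking ext2/3/4
run resize2fs "$LV" 80G             # filesystem down to 80 GiB
run lvresize -L 90G "$LV"           # LV to 90 GiB, safely larger than the fs
                                    # (lvresize asks to confirm a shrink)
run vgs "$VG"                       # confirm enough free space in the VG

# Step 2: migrate all allocated extents off the outgoing PV.
run pvmove -v "$PV"

# Step 3: detach the PV from the VG and wipe its LVM label.
run vgreduce "$VG" "$PV"
run pvremove "$PV"
```

Note that `lvresize --resizefs` (or `lvreduce -r`) will shrink the filesystem and the logical volume together in one step, which sidesteps the size-mismatch risk described below.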
The main thing you can get wrong in the above process is reducing a logical volume too far, i.e. making it smaller than the size you gave `resize2fs` for the filesystem, which will later lead to errors and corruption. A common way to make this mistake is to confuse 1000-based units with 1024-based ones, so be careful which each tool is using. If you are only removing logical volumes rather than resizing any, or the volume group already has enough unallocated space without doing either, you don't need to worry about this.
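To make the unit pitfall concrete: `lvresize -L 100G` and `resize2fs ... 100G` both mean binary gibibytes, but drive vendors and some partitioning tools quote decimal gigabytes, and at this scale the two differ by several GiB (the 100G figure here is just an illustrative size):

```shell
# 100 "G" in binary (GiB) vs decimal (GB) units -- hypothetical size.
gib=$((100 * 1024 * 1024 * 1024))   # 107374182400 bytes (lvresize -L 100G)
gb=$((100 * 1000 * 1000 * 1000))    # 100000000000 bytes (decimal, drive labels)
echo "binary is larger by $(( (gib - gb) / (1024 * 1024) )) MiB"
# prints: binary is larger by 7032 MiB
```

Nearly 7 GiB of discrepancy is more than enough to shrink a logical volume below its filesystem if you mix the two conventions.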