
I created a RAID10 by adding two RAID1 md devices as physical volumes to a volume group. Unfortunately it looks like I forgot to specify the number of stripes when I created the logical volumes (it was late):

PV         VG     Fmt  Attr PSize   PFree  
/dev/md312 volume lvm2 a-   927.01G 291.01G
/dev/md334 volume lvm2 a-   927.01G 927.01G

I know that I can move all the data of a logical volume from one physical volume to another with pvmove. It also looks like lvextend supports an -i switch to change the number of stripes. Is there any way to combine these two, ie. change the number of stripes and "rebalance" the data over the stripes based on the allocation policy?
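For reference, this is roughly what the two operations look like on their own (the LV name somelv is made up for illustration):

# Moves every allocated extent off one PV onto the other, but keeps the linear layout:
pvmove /dev/md312 /dev/md334

# Stripes only the newly allocated extents across both PVs; existing extents stay where they are:
lvextend -i 2 -l +100%FREE volume/somelv /dev/md312 /dev/md334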

According to this mail by Ross Walker from March 2010 it isn't possible but maybe this has changed since then.

mss
  • I'm guessing you're dealing with Linux but you may want to include some more OS specifics so people can be sure. – EightBitTony Jun 30 '11 at 15:10
  • And please expand some of your acronyms for brevity and sanity – thinice Jun 30 '11 at 15:17
  • @EightBitTony I tagged the question with linux and lvm, so I assumed I didn't have to repeat this info in the question itself. I cleared this up. – mss Jun 30 '11 at 15:30
  • @rovangju I expanded all the LVM specific acronyms. – mss Jun 30 '11 at 15:31
  • It may not be possible using LVM. It is possible using mdadm. See answer below. – Nils Aug 17 '11 at 20:33
  • Yes, it looks like it isn't possible (yet?). So the correct answer as of now should be "No". I already did it similar to how you described it, just with a downtime. – mss Aug 19 '11 at 10:08

1 Answer


pvmove is very slow. You will probably be faster if you recreate your layout during a small downtime.

If no downtime is possible, I would recreate md334 as a striped mirror with degraded raid1 devices underneath (i.e. use md for the RAID 10, not LVM). Then do your pvmove to md334, get rid of md312, wipe the md signatures from its disks, and add the two freed disks to your two degraded raid1s (to get back to full redundancy).

I am not sure if you can stack md devices, but I see no reason why that should not be possible. During the pvmove you won't have redundancy.
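A rough command-level sketch of that plan (the underlying disk names sda1/sdb1/sdc1/sdd1 and the md numbers md335/md336 are made up, md334 is assumed to hold no extents yet as the PV listing suggests, and the LV stays online throughout):

# md334 is still empty, so it can be taken out of the VG and dismantled
vgreduce volume /dev/md334
pvremove /dev/md334
mdadm --stop /dev/md334

# two degraded (single-disk) mirrors on the former members of md334
mdadm --create /dev/md335 --level=1 --raid-devices=1 --force /dev/sdc1
mdadm --create /dev/md336 --level=1 --raid-devices=1 --force /dev/sdd1

# stripe over the two mirrors: md-level RAID 10 instead of LVM striping
mdadm --create /dev/md334 --level=0 --raid-devices=2 /dev/md335 /dev/md336

# move the data, then dismantle the old mirror
pvcreate /dev/md334
vgextend volume /dev/md334
pvmove /dev/md312 /dev/md334
vgreduce volume /dev/md312
pvremove /dev/md312
mdadm --stop /dev/md312

# wipe the old signatures and use the freed disks to complete the mirrors
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --grow /dev/md335 --raid-devices=2
mdadm --add /dev/md335 /dev/sda1
mdadm --grow /dev/md336 --raid-devices=2
mdadm --add /dev/md336 /dev/sdb1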

Update 2011-08-17: I just tested the procedure with CentOS 5.6 - it works. Here are the results:

cat /proc/mdstat

Personalities : [raid1] [raid0]
md10 : active raid0 md3[1] md1[0]
      1792 blocks 64k chunks

md3 : active raid1 loop0[1] loop1[0]
      960 blocks [2/2] [UU]

md1 : active raid1 loop2[1] loop3[0]
      960 blocks [2/2] [UU]

To simulate your setup I first set up /dev/md0 as a mirror consisting of loop0 and loop2. I set up a VG with md0 as its only disk, then created an LV within that VG, created a filesystem on the LV, mounted it and wrote some files to it.
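Roughly, the setup part looked like this (the backing file, VG and LV names are made up, and the sizes are arbitrary; my test loop devices were much smaller):

# four small backing files on loop devices
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/disk$i.img bs=1M count=64
    losetup /dev/loop$i /tmp/disk$i.img
done

# md0 plays the role of md312: a plain two-disk mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop2

pvcreate /dev/md0
vgcreate testvg /dev/md0
lvcreate -n testlv -l 100%FREE testvg
mkfs.ext3 /dev/testvg/testlv
mount /dev/testvg/testlv /mnt
echo "some data" > /mnt/testfile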

Then I set up /dev/md1 and /dev/md3 as degraded raid1 devices, consisting of loop3 and loop1 respectively. After that I created a raid10 device by building a raid0 out of md1 and md3.
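In commands, again roughly (device order chosen to match the /proc/mdstat output above):

# degraded (single-disk) mirrors on the two unused loop devices
mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/loop1
mdadm --create /dev/md1 --level=1 --raid-devices=1 --force /dev/loop3

# raid0 across the two degraded mirrors = md-level raid10
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md3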

I added md10 to the VG, then pvmoved md0 to md10 and removed md0 from the VG. I stopped md0, wiped the md signatures from loop0 and loop2, grew the degraded raid1s so they could use two devices, and hot-added loop0 to md3 and loop2 to md1.
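The corresponding commands, roughly (VG name as in the sketch above):

pvcreate /dev/md10
vgextend testvg /dev/md10
pvmove /dev/md0 /dev/md10    # the filesystem stays mounted while the extents move
vgreduce testvg /dev/md0
pvremove /dev/md0
mdadm --stop /dev/md0

# wipe the old md signatures on the freed loop devices
mdadm --zero-superblock /dev/loop0
mdadm --zero-superblock /dev/loop2

# grow the degraded mirrors to two members and hot-add the freed disks
mdadm --grow /dev/md3 --raid-devices=2
mdadm --add /dev/md3 /dev/loop0
mdadm --grow /dev/md1 --raid-devices=2
mdadm --add /dev/md1 /dev/loop2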

The filesystem was still mounted throughout the whole process.

Nils