I have just set up a new SolusVM Xen node at my DC. I asked them to install CentOS 6 with software RAID 10, 100 GB for root, and the remainder in an LVM group for Xen, all on software RAID 10. The server has 4x 1 TB drives.

When I run cat /proc/mdstat I get this:

Personalities : [raid10] [raid1] 
md0 : active raid1 sdb1[1] sda1[0] sdc1[2](S) sdd1[3](S)
      255936 blocks super 1.0 [2/2] [UU]

md2 : active raid1 sdb3[1] sdc3[2](S) sdd3[3](S) sda3[0]
      4192192 blocks super 1.1 [2/2] [UU]

md1 : active raid10 sdc2[2](S) sdd2[3](S) sdb2[1] sda2[0]
      104791552 blocks super 1.1 2 near-copies [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

OK, so that looks good. When I run vgdisplay, though, it tells me I have 3.23 TiB available; with RAID 10 I should only have about 1.7 to 1.8 TiB at most in the LVM (rough math below the output):

  --- Volume group ---
  VG Name               VolGroup
  System ID             
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               3.23 TiB
  PE Size               32.00 MiB
  Total PE              105888
  Alloc PE / Size       0 / 0   
  Free  PE / Size       105888 / 3.23 TiB
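
Rough math behind what I was expecting (give or take the TB/TiB conversion and the exact root/swap sizes):

  4 x 1 TB drives            = 4 TB raw
  RAID 10 (mirrored stripes) = 4 TB / 2   ~ 2 TB   ~ 1.8 TiB
  minus ~100 GB root + swap  ~ 1.7 TiB that should be left for the VG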

They're taking forever to respond to my ticket asking what happened. Am I right that this is a botched RAID 10 install, or is this how it should be? And if it is, why?

jfreak53
1 Answer


From the information you provided, my impression is that the volume group was created from the four partitions sd[abcd]2 and not from the md1 RAID array. You should run pvdisplay and/or pvs to confirm that.
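
For example, something like this will show which devices each PV lives on (the field list is just a convenient subset; plain pvs or pvdisplay with no options works too):

  pvs -o pv_name,vg_name,pv_size    # one line per PV, with the VG it belongs to
  pvdisplay                         # verbose output, one stanza per PV

If those list /dev/sd[abcd]2 (bare partitions) rather than an md device, the VG is indeed bypassing the RAID array.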

If that's correct, the way to proceed would be to remove the LVs, the VG, and the PVs from sd[abcd]2, rebuild the RAID 10 array, and then run pvcreate /dev/md1 followed by vgcreate ...
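
Roughly, that sequence would look like the sketch below. This is only a guess based on my assumption above: confirm the device names with pvs first, and adjust /dev/md1 and the partition names to whatever your system actually uses before running anything destructive:

  vgremove VolGroup                                   # VG is empty (Cur LV 0), nothing to migrate
  pvremove /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2    # wipe the LVM labels from the bare partitions
  mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2       # build the RAID 10 array across all four partitions
  pvcreate /dev/md1                                   # put the PV on the md array instead
  vgcreate VolGroup /dev/md1                          # recreate the VG on top of it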

Jakov Sosic