
I've created a RAID10 array from four 75 GB drives, for a total of 150 GB of storage. After everything finished (including the initial sync), everything looked fine except the output of df -h, which showed only 73G of storage on the designated mount point.

Details:

  • The machine is an m1.large Ubuntu 11.10 instance on Amazon EC2.
  • The 4 drives are EBS drives, each is 75G in size.
  • The RAID10 array was created using the following script (on this instance the EBS devices appear as /dev/xvdh* rather than /dev/sdh*, so the script was actually run with those names, as the output below shows):


#!/bin/sh

disk1="/dev/sdh1"
disk2="/dev/sdh2"
disk3="/dev/sdh3"
disk4="/dev/sdh4"

echo "*** Verifying existence of 4 volumes $disk1, $disk2, $disk3 and $disk4"
if [ -b "$disk1" -a -b "$disk2" -a -b "$disk3" -a -b "$disk4" ]; then
    echo "# Found expected block devices."
else
    echo "!!! Did not find expected block devices.  Error."
    exit -1
fi
until read -p "??? - How big (in GB) are the disks (They should be the same size)?  " disk_size && [ $disk_size ]; do
    echo "Please enter a disk size."
done 

lv_size=$(echo "scale=2; $disk_size * 2.0" | bc)
echo "*** Assuming a per disk size of $disk_size gigs, will create a logical volume of $lv_size gigs, with $lv_size reserved for snapshots"

echo "*** Partitioning disks..."

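# sfdisk input ',,L': default start, default (full-disk) size, type L = Linux (83),
# i.e. one Linux partition spanning each whole disk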
echo "~ Partitioning $disk1"
echo ',,L' | sfdisk $disk1
echo "~ Partitioning $disk2"
echo ',,L' | sfdisk $disk2
echo "~ Partitioning $disk3"
echo ',,L' | sfdisk $disk3
echo "~ Partitioning $disk4"
echo ',,L' | sfdisk $disk4

sleep 6
echo "*** Creating /dev/md0 as a RAID 10"

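# Note: the array members are the same device nodes that were just partitioned
# (not the new partitions), which is why mdadm warns in the output that the
# partition table on each device will be lost.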
/sbin/mdadm /dev/md0 --create --level=10 --raid-devices=4 $disk1 $disk2 $disk3 $disk4 

echo " ~ Allocating /dev/md0 as a physical volume."

/sbin/pvcreate /dev/md0

echo " ~ Allocating a Volume Group 'mongodb_vg'"

/sbin/vgcreate -s 64M mongodb_vg /dev/md0

echo " ~ Creating a Logical Volume 'mongodb_lv'"

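# bc with the default scale does integer division: 75 * 1000 / 64 = 1171 extents
# of 64 MiB each (~73 GiB); note this is based on $disk_size, not the $lv_size
# computed earlier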
num_extents=$(echo "$disk_size * 1000 / 64" | bc)

/sbin/lvcreate -l $num_extents -nmongodb_lv mongodb_vg

echo " ~ Formatting the new volume (/dev/mongodb_vg/mongodb_lv) with EXT4"

/sbin/mkfs.ext4 /dev/mongodb_vg/mongodb_lv

echo " ~ Done! Go ahead and mount the new filesystem.  Suggested FStab: "
echo " /dev/mongodb_vg/mongodb_lv /data ext4 defaults,noatime 0 0"

This is the output I got:

*** Verifying existence of 4 volumes /dev/xvdh1, /dev/xvdh2, /dev/xvdh3 and /dev/xvdh4
# Found expected block devices.
??? - How big (in GB) are the disks (They should be the same size)?  75
*** Assuming a per disk size of 75 gigs, will create a logical volume of 150.0 gigs, with 150.0 reserved for snapshots
*** Partitioning disks...
~ Partitioning /dev/xvdh1
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/xvdh1: 9790 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/xvdh1: unrecognized partition table type
Old situation:   
No partitions found
New situation:   
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/xvdh1p1          0+   9789    9790-  78638174+  83  Linux
/dev/xvdh1p2          0       -       0          0    0  Empty
/dev/xvdh1p3          0       -       0          0    0  Empty
/dev/xvdh1p4          0       -       0          0    0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)  
~ Partitioning /dev/xvdh2
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/xvdh2: 9790 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/xvdh2: unrecognized partition table type
Old situation:
No partitions found
New situation:   
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/xvdh2p1          0+   9789    9790-  78638174+  83  Linux
/dev/xvdh2p2          0       -       0          0    0  Empty
/dev/xvdh2p3          0       -       0          0    0  Empty
/dev/xvdh2p4          0       -       0          0    0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)  
~ Partitioning /dev/xvdh3
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/xvdh3: 9790 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/xvdh3: unrecognized partition table type
Old situation:   
No partitions found
New situation:   
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/xvdh3p1          0+   9789    9790-  78638174+  83  Linux
/dev/xvdh3p2          0       -       0          0    0  Empty
/dev/xvdh3p3          0       -       0          0    0  Empty
/dev/xvdh3p4          0       -       0          0    0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)  
~ Partitioning /dev/xvdh4
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/xvdh4: 9790 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/xvdh4: unrecognized partition table type
Old situation:   
No partitions found
New situation:   
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/xvdh4p1          0+   9789    9790-  78638174+  83  Linux
/dev/xvdh4p2          0       -       0          0    0  Empty
/dev/xvdh4p3          0       -       0          0    0  Empty
/dev/xvdh4p4          0       -       0          0    0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)  
*** Creating /dev/md0 as a RAID 10
mdadm: partition table exists on /dev/xvdh1 but will be lost or
       meaningless after creating array
mdadm: partition table exists on /dev/xvdh2 but will be lost or
       meaningless after creating array
mdadm: partition table exists on /dev/xvdh3 but will be lost or
       meaningless after creating array
mdadm: partition table exists on /dev/xvdh4 but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
 ~ Allocating /dev/md0 as a physical volume.

  Physical volume "/dev/md0" successfully created
 ~ Allocating a Volume Group 'mongodb_vg'
  Volume group "mongodb_vg" successfully created
 ~ Creating a Logical Volume 'mongodb_lv'
  Logical volume "mongodb_lv" created
 ~ Formatting the new volume (/dev/mongodb_vg/mongodb_lv) with EXT4
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux   
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
4800512 inodes, 19185664 blocks
959283 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
586 block groups 
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
 ~ Done! Go ahead and mount the new filesystem.  Suggested FStab:
 /dev/mongodb_vg/mongodb_lv /data ext4 defaults,noatime 0 0

This is the relevant output of df -h:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mongodb_vg-mongodb_lv
                       73G  180M   69G   1% /ebsRaid

This is the output of mdadm --detail /dev/md0:

/dev/md0:
        Version : 1.2
  Creation Time : Wed Feb 29 10:14:39 2012
     Raid Level : raid10
     Array Size : 157283328 (150.00 GiB 161.06 GB)
  Used Dev Size : 78641664 (75.00 GiB 80.53 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Feb 29 13:21:49 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : my.site.com:0  (local to host my.site.com)
           UUID : CENSORED
         Events : 19

    Number   Major   Minor   RaidDevice State
       0     202      113        0      active sync   /dev/xvdh1
       1     202      114        1      active sync   /dev/xvdh2
       2     202      115        2      active sync   /dev/xvdh3
       3     202      116        3      active sync   /dev/xvdh4

This is the output of cat /proc/mdstat:

Personalities : [raid10] 
md0 : active raid10 xvdh4[3] xvdh3[2] xvdh2[1] xvdh1[0]
      157283328 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>

EDIT 1:

This is the output of lvdisplay -m:

  --- Logical volume ---
  LV Name                /dev/mongodb_vg/mongodb_lv
  VG Name                mongodb_vg
  LV UUID                SEpGth-cXd3-ZFhy-XLHo-T5pV-gEd1-Tgancs
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                73.19 GiB
  Current LE             1171
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:0

  --- Segments ---
  Logical extent 0 to 1170:
    Type                linear
    Physical volume     /dev/md0
    Physical extents    0 to 1170

EDIT 2:

This is the output of vgdisplay:

  --- Volume group ---
  VG Name               mongodb_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               149.94 GiB
  PE Size               64.00 MiB
  Total PE              2399
  Alloc PE / Size       1171 / 73.19 GiB
  Free  PE / Size       1228 / 76.75 GiB
  VG UUID               CENSORED
Doron
  • A more KISS-compliant approach, unless your work is based on highly detailed RAID 1+0 performance considerations, would be to create two distinct mirrors and join them at the LVM layer. – unixtippse Feb 29 '12 at 14:48
  • Can you do a `lvdisplay -m` and post it please? – webtoe Feb 29 '12 at 15:09
  • By the way I believe the script is not determining the number of extents correctly. IMHO, it should multiply $disk_size by 1024 and not 1000. – Luis Fernando Alen Feb 29 '12 at 15:11
  • @webtoe - I added it to the question. – Doron Feb 29 '12 at 15:28
  • @LuisFernandoAlen are you sure? – Doron Feb 29 '12 at 15:28
  • I can't answer your question, but it's refreshing to see something like this posted with relevant details and outputs. Stick with us, fine sir. – MDMarra Feb 29 '12 at 15:30
  • Apologies, I should have asked you to also show `vgdisplay` in my last comment. It will show how many Physical Extents there are and how many are free etc. – webtoe Feb 29 '12 at 15:31
  • @webtoe, I've added it to the question – Doron Feb 29 '12 at 15:45
  • Yes, I'm sure, @Doron. The size of your physical extent is given in megabytes (64M), and the script divides the total disk size in megabytes by that extent size to get the number of extents: num_extents=$(echo "$disk_size * 1000 / 64" | bc). So, to get the exact disk size in megabytes, you must multiply the variable by 1024 and not by 1000, since 1G = 1024M (the arithmetic is worked through in the sketch after these comments). – Luis Fernando Alen Feb 29 '12 at 21:07
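
For reference, here is the arithmetic from the comment above, checked the same way the script does it, with bc (a sketch; all the numbers come from the question):

echo "75 * 1000 / 64" | bc             # -> 1171 extents (bc truncates: 75000/64 = 1171.875)
echo "scale=2; 1171 * 64 / 1024" | bc  # -> 73.18 GiB, which lvdisplay rounds to 73.19 GiB
echo "75 * 1024 / 64" | bc             # -> 1200 extents = 75 GiB, still only one disk's worth
echo "150 * 1024 / 64" | bc            # -> 2400 extents for the full array ($lv_size, which the script computes but never uses)

Note that vgdisplay reports only 2399 total PEs (metadata overhead eats a little of the 150 G), which is why extending by +100%FREE, as in the answer below, is safer than computing an exact extent count.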

1 Answer


Your Logical Volume isn't using all of the extents available in its Volume Group:

VG Size               149.94 GiB
PE Size               64.00 MiB
Total PE              2399
Alloc PE / Size       1171 / 73.19 GiB
Free  PE / Size       1228 / 76.75 GiB

You can allocate the remaining free extents to it with the following command:

lvextend -l +100%FREE /dev/mongodb_vg/mongodb_lv /dev/md0

PLEASE READ THE MAN PAGE BEFORE TYPING THIS IN

This command extends the logical volume to use all the FREE extents that are left (you can also take fewer if you want to keep some extents unallocated). It will use the extents on /dev/md0 to do this.

You can then resize the filesystem online using:

resize2fs  /dev/mongodb_vg/mongodb_lv

It should report that it is resizing online. I believe this will solve your issue, but please read the man pages and understand what these commands do before trying them. I'm not responsible for you trashing your disks.
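
Put together, the whole sequence might look like this (a minimal sketch; the device and mount paths are taken from the question):

# 1. Confirm how many extents are free (the "Free PE / Size" line):
vgdisplay mongodb_vg

# 2. Grow the logical volume over all remaining free extents:
lvextend -l +100%FREE /dev/mongodb_vg/mongodb_lv

# 3. Grow the ext4 filesystem to fill the logical volume (works while mounted):
resize2fs /dev/mongodb_vg/mongodb_lv

# 4. Verify the new size:
df -h /ebsRaid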

As an aside, RAID on EBS volumes and then LVM on top seems like an unnecessary level of virtualising the disks. You won't enhance the performance or the safety of the data by adding an extra RAID layer. LVM does mirroring/striping already, if I remember correctly. Although you technically can run LVM on RAID on LVM on RAID ad infinitum, I'm not sure you gain much by doing so (though I'm more than happy to be shown to be wrong on this).
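
For illustration, the two-mirror layout suggested in the comments might look something like the following (a sketch only, for a fresh setup; it assumes the same four EBS volumes and would destroy whatever is on them):

# Two RAID1 pairs from the four EBS volumes:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdh1 /dev/xvdh2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/xvdh3 /dev/xvdh4

# Join the mirrors at the LVM layer, striping across the pair of them:
pvcreate /dev/md0 /dev/md1
vgcreate mongodb_vg /dev/md0 /dev/md1
lvcreate -i 2 -I 512 -l 100%FREE -n mongodb_lv mongodb_vg  # -i 2 stripes over both PVs
mkfs.ext4 /dev/mongodb_vg/mongodb_lv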

webtoe
  • Thanks. The script I showed above is taken from a post by a 10gen employee (the company behind MongoDB) - http://groups.google.com/group/mongodb-user/msg/4d74b8283e4da79d - regarding MongoDB on EBS. I'm not saying MongoDB or 10gen are Linux/system/EBS experts, but it sounded like a solid and well-explained suggestion. Since I haven't done anything else yet on that machine, I am open to (and will gladly accept and be thankful for) other/better solutions. This is the first time I've actually done anything with any RAID setup, so I don't quite understand LVM setups. – Doron Feb 29 '12 at 17:06