
I use my Ubuntu machine as a file server for Windows/Linux/Mac clients using a Samba share. I need it to be easily expandable by just adding more hard drives without having to move any data back and forth.
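
For reference, the Samba side is nothing special. The share is roughly a minimal smb.conf section like the one below (the share name and options are just illustrative, not my exact configuration):

[storage]
    path = /raid
    read only = no
    browseable = yes
    guest ok = no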

This is how I have done it so far, and I have successfully added a fourth hard drive. Now I would like to know: is this how it should be done? What am I doing wrong, and what could I do better?

Creating the initial 3 drive array

I started with three empty drives: /dev/sdb, /dev/sdc and /dev/sdd.

First I created an empty partition on each drive:

$ fdisk /dev/sdX
n # Create a new partition
p # Primary
1 # First partition
[enter] # Starting point to first sector (default)
[enter] # Ending point to last sector (default)
t # Change partition type
fd # Type: Linux raid autodetect
w # Write changes to disc

Once the RAID partitions had been created on all three drives, I created a RAID5 array:

$ mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
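
The array starts its initial sync in the background right away; this is standard md behaviour and its progress can be checked at any time:

$ cat /proc/mdstat
$ mdadm --detail /dev/md0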

The initial build takes time, but there is no need to wait for it to finish; you can already proceed with creating a new LVM2 physical volume:

$ pvcreate /dev/md0

Now let's create a new volume group:

$ vgcreate vg_raid /dev/md0

Then we need to create a new logical volume inside that volume group. First we need to figure out the exact size of the created volume group:

$ vgdisplay vg_raid

The size, in physical extents, can be read from the "Total PE" row. Let's imagine it is 509. Now create a new logical volume which takes all the available space:

$ lvcreate -l 509 vg_raid -n lv_raid
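
If you'd rather not look up the extent count by hand, lvcreate can also be told to take all free space directly (this should give the same result as the command above):

$ lvcreate -l 100%FREE -n lv_raid vg_raid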

Finally we can create a file system on top of that logical volume:

$ mkfs.xfs /dev/mapper/vg_raid-lv_raid

To be able to use the newly created storage, we need to create a mount point and mount the file system:

$ mkdir /raid
$ mount /dev/mapper/vg_raid-lv_raid /raid

Now it is ready to use. But for the array to be assembled automatically after a reboot, we need to save its geometry to mdadm's configuration file:

$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
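
On Ubuntu the mdadm configuration is also copied into the initramfs, so after changing the file it doesn't hurt to refresh it as well (I believe this mainly matters when the array is needed early at boot, but it keeps things consistent):

$ update-initramfs -u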

Then add the following line to /etc/fstab which mounts the RAID array automatically:

/dev/mapper/vg_raid-lv_raid /raid auto auto,noatime,nodiratime,logbufs=8 0 1
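
A quick way to test the fstab entry without rebooting is to unmount the file system and mount it again using only that entry:

$ umount /raid
$ mount /raid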

Now the RAID array is ready to use, and it is mounted automatically at /raid after every boot.

Adding a new drive to the array

Let's imagine that now you have a new drive, /dev/sde, which you want to add to the previously created array without losing any data.

First the new drive needs to be partitioned in the same way as the other drives:

$ fdisk /dev/sde
n # Create a new partition
p # Primary
1 # First partition
[enter] # Starting point to first sector (default)
[enter] # Ending point to last sector (default)
t # Change partition type
fd # Type: Linux raid autodetect
w # Write changes to disc

Then it needs to be added to the RAID array:

$ mdadm --add /dev/md0 /dev/sde1

Now the RAID5 array includes four drives, of which only three are currently in use. The array needs to be expanded to include all four drives:

$ mdadm --grow /dev/md0 --raid-devices=4
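
Depending on the mdadm version, the reshape may refuse to start without a backup file for its critical section; in that case the same command takes a --backup-file option (the path below is only an example). Either way, the reshape runs in the background for several hours and its progress shows up in /proc/mdstat just like the initial build:

$ mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup
$ watch cat /proc/mdstat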

Then the physical LVM2 volume needs to be expanded:

$ pvresize /dev/md0
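
A quick sanity check that the physical volume actually picked up the extra space:

$ pvdisplay /dev/md0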

Now the physical volume is resized by default to cover all available space in the RAID array. We need to find out the new size in physical extents:

$ vgdisplay vg_raid

Let's imagine that the new size is now 764 (can be seen from "Total PE"). Now expand the logical volume to cover this:

$ lvextend /dev/mapper/vg_raid-lv_raid -l 764
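
As with lvcreate, the extent count can be skipped by asking for all remaining free space instead:

$ lvextend -l +100%FREE /dev/mapper/vg_raid-lv_raid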

Then expand the XFS file system. This has to be done while the file system is online and mounted (XFS can only be grown online):

$ xfs_growfs /raid
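
To confirm that the file system actually grew, check the block count and the free space afterwards:

$ xfs_info /raid
$ df -h /raid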

By default xfs_growfs expands the file system to cover all the available space. Finally, the RAID array geometry needs to be updated in mdadm's configuration, because the array now includes a new disk. First delete the previously added ARRAY line from /etc/mdadm/mdadm.conf and then append a new one:

$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
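
If you prefer not to edit the file by hand, and assuming the only ARRAY lines in it are the ones added earlier, something like this does the same thing:

$ sed -i '/^ARRAY/d' /etc/mdadm/mdadm.conf
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
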
Taskinen
  • Don't put partitions on your disks. No need for it - the in-kernel RAID autodetect (partition type fd) is deprecated. – James Apr 20 '10 at 20:55
  • So instead of creating type 'fd' partitions with fdisk, I should just create the /dev/md0 array on /dev/sdb, /dev/sdc and /dev/sdd directly? – Taskinen Apr 24 '10 at 19:23
  • I have heard that not all disks are the same size, so if I buy a new terabyte disk, it might not be exactly the same size. Would that introduce some problems? – Taskinen Apr 24 '10 at 19:58

1 Answer


I think you've got it right. Make sure you understand and heed the warnings regarding growing RAID 5 in man 8 mdadm.

Personally if I were growing an LVM volume, I would not be growing an existing RAID array to do it. I'd create another RAID array, create a new physvol from it, and add it to the same volume group. This is a much safer operation (doesn't involve rewriting the whole RAID5 array across the new set of disks) and keeps the size of your arrays down.
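
Roughly, that approach looks like this once the new disks are in place (the device names below are just placeholders):

$ mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1
$ pvcreate /dev/md1
$ vgextend vg_raid /dev/md1
$ lvextend -l +100%FREE /dev/mapper/vg_raid-lv_raid
$ xfs_growfs /raid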

Kamil Kisiel
  • Absolutely agree. vgextend is your friend here. – Dan Andreatta Feb 19 '10 at 12:36
  • In general I understand, but what about the situation where I want to grow the above-mentioned three-disk array into a four-disk array? I can't create a new RAID array from the fourth disk alone. – Taskinen Apr 24 '10 at 19:25
  • I wouldn't be expanding a storage server's disk array one disk at a time. Going from a three-disk array to a four-disk array will give you only 50% more storage, because you have to use the same size disks. – Kamil Kisiel Apr 25 '10 at 15:55
  • Agreed. By the time you run out of space, bigger disk drives will have come down in price. Build a second RAID array on a new set of bigger drives, then pvmove your old data to that and decommission the old set once the pvmove is done. This can all be done while the filesystems in the logical volumes affected by pvmove are in active use. – flabdablet Aug 14 '12 at 13:59