
I'm trying to create a RAID 5 array with 4x 2TB disks on Debian 6. I followed the instructions from: http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm

I created the array with the following command: `sudo mdadm --create --verbose /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1`

After creating the array, `mdadm --detail /dev/md0` shows me:

/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun 11 18:14:26 2012
     Raid Level : raid5
     Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jun 11 18:14:26 2012
          State : clean, degraded
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rsserver:0  (local to host rsserver)
           UUID : a68c3c99:1ef865e9:5a8a7bdc:64710ed8
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       0        0        3      removed

       4       8       65        -      spare   /dev/sde1

Why is there a spare drive? I didn't create one, and I don't want to use a spare drive.

R.S.
    Refer to [What are the different widely used RAID levels and when should I consider them?](http://serverfault.com/questions/339128/what-are-the-different-widely-used-raid-levels-and-when-should-i-consider-them) for reasons why RAID 5 is not recommended for large volumes. – Skyhawk Jun 11 '12 at 16:32
  • 2
    Couldn't agree with Miles more - please don't use R5, not unless you're leaving your job very soon anyway. – Chopper3 Jun 11 '12 at 16:41
  • 1
    Check again after the resync has finished. `watch -n 60 cat /proc/mdstat` will be handy. – Charles Jun 11 '12 at 16:51
  • Well, I do understand the problems while building and rebuilding the array. But what should one use for such large drives? @Charles: Thanks, I'll check. – R.S. Jun 11 '12 at 16:53
  • 2
    RAID 10 or 6 are the good choices for large drives. – Zoredache Jun 11 '12 at 16:59
  • Or RAIDZ if you're not tied to Debian. – MDMarra Jun 11 '12 at 17:06
  • Originally I wanted to use FreeNAS and ZFS for my server. Unfortunately I can't switch because of some applications. How good is ZFS support on Debian? – R.S. Jun 11 '12 at 18:01
  • Possible duplicate: http://serverfault.com/questions/43575/how-to-create-a-software-raid5-array-without-a-spare –  Nov 05 '12 at 08:57

2 Answers


If you really want a RAID 5 of your 4 drives (see the comments above), you should be able to set the spare-device count to 0 with `--spare-devices=0`.

krissi
  • `--spare-devices` makes no difference in this case: see http://serverfault.com/questions/43575/how-to-create-a-software-raid5-array-without-a-spare –  Nov 05 '12 at 08:58

From `man mdadm`:

When creating a RAID5 array, mdadm will automatically create a degraded array with an extra spare drive. This is because building the spare into a degraded array is in general faster than resyncing the parity on a non-degraded, but not clean, array. This feature can be overridden with the --force option.

In other words, once the initial resync has finished, the spare will be absorbed into the array as an active device, exactly as you intended. If you would rather have mdadm build the array the 'slow' way, without the temporary spare, use `--force`.
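As a sketch (using the same partitions as in the question), the original create command with `--force` added would look like this; note that `--force` here suppresses the degraded-plus-spare optimization, so the initial build does a full parity resync and takes longer:

```shell
# Create the RAID 5 array without the temporary spare-based build,
# forcing a normal (slower) parity resync from the start:
sudo mdadm --create --verbose /dev/md0 --auto=yes --level=5 \
    --raid-devices=4 --force /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Monitor the resync; all four devices should appear as active members:
cat /proc/mdstat
```

Either way, check `mdadm --detail /dev/md0` again after the resync completes: with the default behaviour, the "spare" will have become the fourth active device and the state will no longer read "degraded".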