
Let's say I've got six identical drives and I'm going to use them all in a RAID10,f2 array constructed using mdadm. I've always put a single partition on each disk and built the array from /dev/sd[bcdefg]1 rather than the whole disk. But I'm wondering if that's the best thing to do with a modern kernel and mdadm.
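
For concreteness, here's roughly what I mean (assuming /dev/md0 as the array name):

    # What I've been doing: one partition per disk, array built from the partitions
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]1

    # The alternative: build the array from the bare disks
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]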

kbyrd

3 Answers


I don't think there is a big difference either way. But I would generally do whole disk, to keep the configuration simple.
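
A sketch of what I mean (array name and filesystem choice are just examples):

    # Build the array straight from the bare disks -- nothing to partition first
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]
    # One filesystem on the one md device and you're done
    mkfs.ext4 /dev/md0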

Antonius Bloch
  • Also, it's less work to partition the RAID device than to partition 6 separate disks. – Antonius Bloch Feb 04 '11 at 17:45
  • I read a good piece of advice not to use whole disks: people who aren't experienced or aren't familiar with the system might get confused when they run "fdisk -l" and see no partition table. This might lead them to think the devices aren't in use. That's a good enough tie-breaker for me... – pboin Feb 04 '11 at 18:17
  • @pboin this is a fair point, but I find it hard to decide if I prefer to run a neater server or one that "lesser" admins can understand! – Coops Feb 04 '11 at 20:30
  • If I worked alone, I'd do devices in a minute. But, well... I don't. – pboin Feb 04 '11 at 21:21

The way you're doing it (one large partition per disk that you build the mdadm array from), there's no major difference. But since you're effectively using the whole disk anyway, I'd do as Antonius Bloch suggested and use the whole-disk device rather than creating a partition; it just seems more correct to me to build your RAID from the full physical device rather than a chunk of it.

If you create multiple partitions and set up mdadm volumes across them, you may actually see a performance decrease: if you split your disks in half, with one array on the first half of each disk and another on the second half, the drives will have to seek back and forth whenever you read or write on both arrays, and that head travel time will kill your performance. The solution there, though, is simply not to do that :-)
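
To make that layout concrete (hypothetical partitioning, with each disk split into a 1 and a 2 partition):

    # The anti-pattern: two arrays sharing the same six spindles
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]1
    mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]2
    # Concurrent I/O on md0 and md1 forces every head to shuttle between
    # the inner and outer halves of its disk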

voretaq7

If you have a small setup and swap is going on these drives, you may want to keep swap separate rather than putting it on the array, since the kernel can do its own round-robining across multiple swap devices.
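
For instance, giving each swap partition the same priority in /etc/fstab lets the kernel stripe across them (device names here are just placeholders):

    # Equal pri= values mean the kernel round-robins swap I/O across the devices
    /dev/sdb2  none  swap  sw,pri=1  0  0
    /dev/sdc2  none  swap  sw,pri=1  0  0
    /dev/sdd2  none  swap  sw,pri=1  0  0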

Or, you may need a separate /boot (without LVM) but want LVM for the rest of the disk. This is a relatively common thing if you're trying to mirror system drives. (And while you're doing that, since disks are so gigantic these days and way too big for just the OS, you might choose to mirror only a portion of the disk and make the rest non-mirrored scratch space.)
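
A sketch of that kind of layout, with all device and partition names assumed (two system disks):

    # Small mirrored /boot, mirrored LVM PV for the OS, the rest left alone
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    pvcreate /dev/md1                  # LVM takes it from here
    # /dev/sda3 and /dev/sdb3 stay as plain, non-mirrored scratch space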

mattdm
  • That's a bad idea - if any one of those disks dies, your system might crash if there are pages swapped out. – Petr Apr 21 '22 at 16:30
  • *Checks date of post... wow. Anyway...* Assuming you're referring to the part about swap, I'd call that a "possible disadvantage" not categorically a "bad idea". There are plenty of situations where the consequences of that unlikely circumstance are outweighed by the benefits. – mattdm Apr 22 '22 at 01:39