
I'm creating a RAID10 array from 6 drives. When it is created in the near layout, e.g.

mdadm --create /dev/md2 --chunk=64 --level=10 --raid-devices=6 --layout=n2 /dev/sda1 ...
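
A quick way to double-check the chunk size and layout after creation (device name assumed to match the command above):

mdadm --detail /dev/md2 | grep -E 'Layout|Chunk Size'
cat /proc/mdstat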

Checking stripe size as reported by the system:

cat /sys/devices/virtual/block/md2/queue/optimal_io_size

The result is 196608, as expected, i.e. 3 data drives (50% of the 6 total in RAID10) x 64K chunk = 192K stripe.
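
For reference, minimum_io_size in the same sysfs directory should report the chunk size, so the figure above is simply chunk x data drives (paths and values assumed to match the array above):

cat /sys/devices/virtual/block/md2/queue/minimum_io_size   # 65536 (64K chunk)
echo $((65536 * 3))                                        # 196608 (192K stripe over 3 data drives)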

Now, when creating the same array with the --layout=f2 option, optimal_io_size reports 393216, i.e. twice as large.
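
The same arithmetic with all 6 drives instead of 3 matches the far-layout figure exactly:

echo $((65536 * 6))   # 393216, i.e. chunk x 6 drives, RAID0-style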

Now, according to Neil Brown (the md RAID10 author),

The "far" layout lays all the data out in a raid0 like arrangement over the first half of all drives, and then a second copy in a similar layout over the second half of all drives - making sure that all copies of a block are on different drives.

This would be expected to yield read performance which is similar to raid0 over the full number of drives, but write performance that is substantially poorer as there will be more seeking of the drive heads.

So it seems the OS is suggesting I should use a RAID0-like stripe size (across all disks in the array), and not the "traditional" RAID10 stripe size (across half of the disks in the array). This has potentially serious implications for LVM and filesystem alignment, stripe/stride tuning, etc. However, I've never seen any suggestion anywhere to treat mdadm RAID10 in far mode as RAID0.
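
To make the difference concrete, here is how the two interpretations would translate into ext4 stripe tuning for this array (a sketch only, assuming 4K filesystem blocks, so stride = 64K / 4K = 16):

mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md2   # "traditional" RAID10: 16 x 3 data drives
mkfs.ext4 -b 4096 -E stride=16,stripe-width=96 /dev/md2   # RAID0-like, as optimal_io_size implies for f2: 16 x 6 drives

The same choice would show up in pvcreate --dataalignment (192K vs. 384K) if LVM sits in between.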

Question: Am I missing something here, or am I correct to treat RAID10,f2 as RAID0 when aligning/tuning whatever lies on top of that RAID?

haimg
