RAID 5 degraded performance


I'm currently rebuilding my fileserver, but because of SATA port limitations I can only use 5 disks at a time. I've removed a drive from my original RAID, so I've been able to create a new degraded RAID5 array.

I now have two 3×2 TB (minus 1 missing) degraded RAID5 arrays. The new disks are WD NAS drives (4K-optimized).

I've followed this guide in order to be 4K-compliant:

...except that I'm not using LVM.
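
For reference, 4K alignment of the partitions can be double-checked with parted (this assumes GNU parted and 512-byte logical sectors, so a start sector divisible by 8 means 4 KiB alignment; /dev/sde stands in for one of the new drives):

parted /dev/sde unit s print          # partition start sectors should be multiples of 8
parted /dev/sde align-check optimal 1 # reports whether partition 1 is optimally aligned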

Write test for the original degraded array (dd if=/dev/zero of=/mnt/data/out bs=1M count=10240):

90 MB/s

New degraded array:

120 MB/s
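
As a side note, dd writing from /dev/zero without a flush partly measures the page cache; a variant of the same test that forces the data to disk before reporting a rate (assuming GNU coreutils dd, which supports conv=fdatasync) would be:

dd if=/dev/zero of=/mnt/data/out bs=1M count=10240 conv=fdatasync  # fdatasync() before dd prints the throughput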

Although these numbers are slightly better, I'm wondering how the degraded state affects performance. The author of the guide measures 236 MB/s with the same test (but on a non-degraded array).

Before copying all my data and switching back to a fully operational array, I'm wondering whether 120 MB/s could be normal write performance in my case.
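
For what it's worth, the degraded state and the array geometry can be confirmed with the standard md tools (/dev/md1 is just an example device name here):

cat /proc/mdstat           # a degraded 3-disk RAID5 shows [3/2] [UU_]
mdadm --detail /dev/md1    # "State : clean, degraded" plus chunk size and layout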

Barium Scoorge

Posted 2013-09-08T12:51:42.390


Using an external SATA connector, I've added another 2 TB drive to replace the missing RAID5 drive. Write performance is still at 120 MB/s. – Barium Scoorge – 2013-09-09T17:13:30.520

Answers


I've finally managed to reach a speed of 175 MB/s using these parameters:

mdadm --create --bitmap=internal --metadata 1.0 --verbose /dev/md1 --chunk=32 --level=5 --raid-devices=3 /dev/sde1 /dev/sdf1 missing
mkfs.ext4 -m 0 -b 4096 -E stripe-width=256,stride=128 /dev/md1

That's 95% faster than my old RAID5 setup. (The box is an AMD X2 250 on a GA-880 motherboard with 4 GB of RAM.)
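
For anyone adapting these parameters: stride and stripe-width for ext4 on md are conventionally derived from the chunk size and the number of data-bearing disks (2 for a 3-device RAID5). A minimal sketch of the arithmetic, with the chunk size left as a placeholder to set to whatever --chunk the array actually uses:

CHUNK_KB=512   # placeholder: md chunk size in KiB
BLOCK_KB=4     # ext4 block size (mkfs.ext4 -b 4096)
DATA_DISKS=2   # a 3-device RAID5 has 2 data disks per stripe
echo "stride=$((CHUNK_KB / BLOCK_KB)) stripe-width=$((CHUNK_KB / BLOCK_KB * DATA_DISKS))"

A 512 KiB chunk is what evaluates to the stride=128, stripe-width=256 used above.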

Barium Scoorge

Posted 2013-09-08T12:51:42.390
