See the numbers below for /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1:
$ iostat -dxm && cat /proc/mdstat
Linux 3.5.0-17-generic (avarice) 12-12-29 _x86_64_ (2 CPU)
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 4.29 3.97 5.48 6.55 0.25 0.08 55.17 0.42 34.63 63.87 10.18 4.55 5.47
sdb 2717.50 0.00 54.54 0.01 10.83 0.00 406.53 0.14 2.53 2.53 6.53 1.63 8.92
sdc 1390.51 0.00 11.67 0.01 5.48 0.00 960.60 0.04 3.00 3.00 5.47 1.57 1.83
sdd 1390.49 0.00 11.50 0.01 5.48 0.00 974.54 0.04 3.06 3.06 6.13 1.61 1.85
sde 0.10 1390.35 0.44 11.03 0.00 5.47 977.75 0.03 2.68 0.31 2.77 2.57 2.95
md0 0.00 0.00 0.05 0.00 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sde1[4] sdd1[2] sdc1[1] sdb1[0]
5860145664 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 0.5% (10476048/1953381888) finish=501.1min speed=64609K/sec
The RAID rebuild is running at about 64 MB/s, yet iostat says the member drives are reading/writing at only around 5 MB/s each. Is iostat wrong?
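For reference, /proc/mdstat reports the rebuild speed in K/sec while iostat's rMB/s and wMB/s columns are in megabytes per second, so comparing them takes a unit conversion. A minimal sketch (parsing the mdstat line quoted above, saved here as a shell variable for illustration):

```shell
# Hypothetical example: pull the speed= field out of the mdstat recovery line
# and convert it from K/sec to MB/s (dividing by 1024).
line='[>....................]  recovery =  0.5% (10476048/1953381888) finish=501.1min speed=64609K/sec'

# Extract the numeric part of "speed=64609K/sec".
speed_k=$(echo "$line" | grep -o 'speed=[0-9]*' | cut -d= -f2)

# Convert to MB/s for comparison with iostat's rMB/s column.
echo "$speed_k" | awk '{ printf "%.1f MB/s\n", $1 / 1024 }'
```

With the figures above this prints roughly 63 MB/s, which is the per-array rebuild rate, not what any single member drive necessarily shows in iostat.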