7

In a PowerEdge 1900 I have a hardware RAID 5 of 5x250GB disks plus one 250GB hot spare.

Disk 2 has errors, so I forced it offline in the RAID BIOS to replace it.

It has now been rebuilding to the hot spare for 4 hours and is not finished yet. That is less than 1GB per minute.

What rebuild time should I expect?

What could be the reason it is taking so long?

Sandra
  • Depends on the speed and type of the disks in question and the raid controller too. – Nate Sep 23 '11 at 21:44
  • Another factor yet unmentioned by anyone here is that the PERC raid controllers (which are re-branded LSI MegaRaid controllers BTW) do have settings for rebuild rate - with the default settings no more than 30% of the disk's time is used for the rebuild. This is configurable via the storage manager. – the-wabbit Sep 23 '11 at 21:53
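
Following up on the rebuild-rate comment above: on PERC/LSI MegaRAID controllers the current rate can usually be read (and raised) with the MegaCli utility. Here is a rough sketch of scripting that from Python; the install path and the exact flag spelling vary between MegaCli releases, so treat the invocations below as assumptions to verify against your own version's help output.

    import subprocess

    # Assumed install path for the MegaCli binary; adjust for your system.
    MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"

    def get_rebuild_rate():
        # Query the controller-wide rebuild rate (percentage of controller
        # time dedicated to rebuilds). Flag spelling may differ between
        # MegaCli releases - verify against your version's help output.
        out = subprocess.run([MEGACLI, "-AdpGetProp", "RebuildRate", "-aALL"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def set_rebuild_rate(percent):
        # Raising the rate above the ~30% default speeds up the rebuild at
        # the cost of foreground I/O performance.
        subprocess.run([MEGACLI, "-AdpSetProp", "RebuildRate", str(percent), "-aALL"],
                       check=True)

    if __name__ == "__main__":
        print(get_rebuild_rate())
        # set_rebuild_rate(60)  # uncomment to trade host I/O for rebuild speed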

2 Answers

8

Depending on the load on the server, this is entirely normal. For every read or write that is still happening, the RAID card has to calculate parity over the surviving disks to recreate the missing data. Couple that with the intense load of a rebuild and the age of a 1900 and you've got a relatively slow rebuild. That's why I almost always recommend RAID 10 when possible.
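
To make the parity point concrete, here is a minimal sketch (plain Python, nothing like the controller's actual firmware) of why serving a degraded RAID 5 array is expensive: every block that lived on the failed disk has to be recomputed by XOR-ing the corresponding blocks from all surviving members.

    def xor_blocks(blocks):
        # XOR equal-length byte blocks together - the parity arithmetic RAID 5 uses.
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, byte in enumerate(blk):
                out[i] ^= byte
        return bytes(out)

    # One stripe across the five members: four data blocks plus one parity block.
    stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # toy 4-byte data blocks
    parity = xor_blocks(stripe)                      # parity block on the fifth disk

    missing = stripe.pop(1)                          # "disk 2" fails

    # Reconstruction: read every surviving block (data and parity) and XOR them.
    rebuilt = xor_blocks(stripe + [parity])
    assert rebuilt == missing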

MDMarra
  • Because a rebuild of RAID 10 would only be 250GB of data copied? – Sandra Sep 23 '11 at 21:51
  • @Sandra Because no parity data needs to be calculated and only the amount of data that's on the corresponding mirror needs to be copied. – MDMarra Sep 23 '11 at 22:23
3

In order to rebuild the array, the controller has to read not just the 250GB that was on the missing drive, but the full 1TB spread across the four surviving members, while writing 250GB to the hot spare. Such rebuilds can take quite some time.
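
For a rough sense of scale, here is a back-of-the-envelope estimate. All numbers below are assumptions rather than measurements of the asker's hardware: 250GB has to be written to the hot spare while the four survivors are read, the PERC's default rebuild rate only gives the job about 30% of the controller's time, and 2011-era 7.2k SATA disks sustain somewhere around 70MB/s.

    # Back-of-the-envelope rebuild-time estimate; throughput and rebuild-rate
    # values are assumptions, not measurements.
    spare_size_gb = 250     # data to rewrite onto the hot spare
    disk_mb_per_s = 70      # assumed sustained throughput of a 7.2k SATA disk
    rebuild_rate  = 0.30    # PERC default: ~30% of controller time for rebuilds

    effective_mb_per_s = disk_mb_per_s * rebuild_rate
    hours = spare_size_gb * 1024 / effective_mb_per_s / 3600
    print(f"~{hours:.1f} hours")   # roughly 3-4 hours before any foreground load

Add real foreground I/O on top of that and the four-plus hours observed here is well within the range these assumptions predict.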

sysadmin1138