
At work I administer several machines with (real) hardware RAID controllers (with battery-backed write caches), and these have the nasty habit of falling back to write-through behaviour when an array becomes degraded by a disk failure.

I cannot think of a good reason for this, so I have configured these arrays to force write-back behaviour while the hot spare rebuilds the array, and everything seems to be running well so far.
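For anyone wondering, the forcing can be done along these lines with MegaCli or Dell OMSA's omconfig (option names can vary by tool version, and the controller/vdisk indices here are just placeholders):

    # Show the cache policy of every virtual disk on every adapter
    MegaCli -LDGetProp Cache -LAll -aAll

    # Force write-back even when the controller would otherwise fall back
    MegaCli -LDSetProp ForcedWB -LAll -aAll

    # Roughly equivalent via Dell OMSA (fwb = force write-back)
    omconfig storage vdisk controller=0 vdisk=0 action=changepolicy writepolicy=fwb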

Can anyone think of a reason why it would be a good idea to switch to write-through while an array is running in degraded mode after a disk failure? (Of course, if the BBU itself fails, write-through instead of write-back makes complete sense.)

jap
  • Um, what type of server, controller, disks and their respective makes/models? – ewwhite Jan 14 '13 at 21:28
  • We're mostly working with Dell machines; anything from R320 to R910s and all models in between, with MD1000/MD1200s for external storage. For RAID controllers, we're using PERC5(e) cards on the older boxes and H310/H800s on the newest stuff. Mostly 7200rpm disks (and recently some CacheCade SSDs), installed by Dell. – jap Jan 14 '13 at 21:55

1 Answer


From a data-protection perspective, there is no additional potential for data loss from having the write-back cache enabled during a rebuild operation.

Some controllers disable the write-back cache during a rebuild because they don't have enough processing headroom to manage the cache and the rebuild at the same time, or because their firmware is not sophisticated enough to handle both.

There are controllers out there that can do a rebuild while the write-back cache is enabled. You appear not to have one of those.
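A quick way to see which kind you have is to compare the controller's default and current cache policies while a rebuild is actually running; something along these lines with MegaCli (the enclosure/slot IDs are placeholders):

    # Rebuild progress for the physical drive in enclosure 32, slot 2
    MegaCli -PDRbld -ShowProg -PhysDrv [32:2] -a0

    # Compare "Default Cache Policy" with "Current Cache Policy"; a controller
    # that falls back will report WriteThrough as the current policy mid-rebuild
    MegaCli -LDInfo -LAll -aAll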

longneck
  • My reason for asking is the performance degradation caused by shifting to write-through mode. And yes, I do hit those disks hard enough that it's noticeable. – jap Jan 14 '13 at 21:52