2

Possible Duplicate:
What are the different widely used RAID levels and when should I consider them?

My team at work uses a pair of servers each with a 2.7TB RAID 10 array composed of 8x 750GB hard drives. They are set up in different physical locations; one is kept purely as a disaster recovery / software testing environment, and the other handles all our modelling and simulation workload.

We've got to the point where we're regularly running out of space, and we're exploring our options for adding more. Only about 10 people regularly use the server, and usually no more than 4-5 are working on it at any one time. The workload is read-heavy rather than write-heavy; our larger database files tend to be read many times but only written to when first created.

The host has asked our team what we'd like to do about this. None of us has any practical experience with this sort of thing, so I thought I'd post here.

The cheapest option seems to be to take a full backup of the production server, rebuild the RAID array as RAID 5/6, and then restore the backup. If I understand correctly, this should give us 50-75% extra capacity. I'm not sure whether we currently use hot spares.
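For reference, here's the back-of-envelope arithmetic behind that 50-75% figure (just a rough sketch using our drive count and sizes; it ignores filesystem and controller overhead):

    # Rough usable-capacity comparison for 8 x 750 GB drives.
    drives, size_gb = 8, 750

    raid10 = (drives // 2) * size_gb   # mirrored pairs: half the drives usable
    raid6  = (drives - 2) * size_gb    # two drives' worth of parity
    raid5  = (drives - 1) * size_gb    # one drive's worth of parity

    print(f"RAID 10: {raid10} GB")                               # 3000 GB (current)
    print(f"RAID 6:  {raid6} GB (+{raid6 / raid10 - 1:.0%})")    # 4500 GB, +50%
    print(f"RAID 5:  {raid5} GB (+{raid5 / raid10 - 1:.0%})")    # 5250 GB, +75%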

Question time:

  1. Assuming that we have a decent hardware RAID controller, what sort of change to read and write speeds should we expect to see if we switch our 8 drives from RAID 10 over to RAID 5/6? Anything dramatic?
  2. Am I correct in thinking that RAID 5 on each server is sufficient to avoid any major inconvenience or total loss of data, given that we have a backup, or should we definitely go for RAID 6?
user3490
  • RAID 5 with high capacity drives is a bad idea. Don't do that. – Bart Silverstrim Feb 03 '12 at 22:51
  • consider using an SSD to store hot data; you can achieve wonders with an SSD and ZFS! You'll find you get much better performance. Also never underestimate a RASSD (redundant array of solid state disks). And make sure you have enough network bandwidth to handle the increased speeds! http://www.nerdblog.com/2010/03/zfs-nas-followup-ssd-is-amazing.html – The Unix Janitor Feb 04 '12 at 00:17
  • @BartSilverstrim - thanks, I had a look at the related questions before posting but I didn't see that one. Perhaps more people should upvote it to make it more prominent? – user3490 Feb 04 '12 at 10:46

2 Answers

4

Oh wow, we get asked this stuff EVERY DAY OF THE WEEK, and it always comes down to this: use RAID 6 or 10, nothing else. Use 10 where you care about performance and use 6 when you don't care or can't afford it. There is no third option; either is good, just avoid everything else, especially RAID 5, for dull math reasons.

Chopper3
  • It never ceases to amaze me how many people still consider RAID 5 a viable option, especially as array capacities increase. – Luis Ventura Feb 04 '12 at 07:26
  • @Chopper3 That covers part 2 of the question, but what about part 1? I appreciate that due to varying hardware performance it's unlikely that any estimate would be a perfect match for our particular server, but I'd still like to get a general idea of what other people have seen when doing something similar. – user3490 Feb 04 '12 at 10:54
  • @LuisVentura It's interesting to see this strongly anti-RAID 5 sentiment, as our host didn't mention RAID 6 as an option at all. A bit more reading turned up this: http://www.miracleas.com/BAARF/ – user3490 Feb 04 '12 at 11:04
  • @user3490 for such a small array of such slow drives you'll only see ~10-20% slowdown on reads and maybe ~20-30% on writes by moving from 10 to 6. – Chopper3 Feb 04 '12 at 11:52
  • @Chopper3 Thank you, I've now marked this as accepted. – user3490 Feb 06 '12 at 18:34
1

RAID 5 should not be used if RAID 6 is available. With large arrays of high-capacity drives there is a non-negligible chance of at least one additional failure (a second drive failure or an unrecoverable read error) occurring during the rebuild after the first drive fails. If this happens with RAID 5, the whole array is lost.
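To put a rough number on that rebuild risk, here is the usual back-of-envelope calculation as a small Python sketch. The unrecoverable read error (URE) rate used is an assumption typical of consumer-class drives, not a measured figure for these particular disks; enterprise drives are often rated an order of magnitude better.

    import math

    # Odds of hitting at least one unrecoverable read error (URE) while
    # rebuilding an 8 x 750 GB RAID 5 array after a single drive failure.
    ure_rate = 1e-14          # assumed probability of a URE per bit read
    drive_bytes = 750e9       # 750 GB per drive
    surviving_drives = 7      # every surviving drive is read in full to rebuild

    bits_read = surviving_drives * drive_bytes * 8
    p_ure = 1 - math.exp(-ure_rate * bits_read)   # ~ 1 - (1 - r)^n for small r

    print(f"Chance of at least one URE during the rebuild: {p_ure:.0%}")  # ~34%

With RAID 5, that single URE during the rebuild is enough to lose the array; with RAID 6, the second parity block lets the rebuild continue.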

RAID 6 is the better choice when capacity matters more than speed and reads are more frequent than writes. However, it would be a good idea to maintain a separate, smaller RAID 10 volume for temporary files and ad-hoc work, preferably on a separate set of drives. This can be done either by installing more drives or by allocating a small slice of each of the 8 existing drives to a RAID 1+0 volume and putting the remainder of each towards a RAID 6 volume. The 1+0 volume will be somewhat faster if it occupies the first partition on each of the 8 drives, as read and write speeds tend to be higher towards the outer edge of the platters.
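As a rough illustration of that split (the 50 GB slice size is an arbitrary assumption; whether the controller supports building two arrays from slices of the same drives is something to confirm with your host):

    # Illustrative split of 8 x 750 GB drives into a small RAID 10 scratch
    # volume plus a large RAID 6 bulk volume.
    drives, drive_gb, slice_gb = 8, 750, 50   # slice_gb is an assumed value

    raid10_scratch = (drives // 2) * slice_gb            # 4 x 50 GB  = 200 GB
    raid6_bulk = (drives - 2) * (drive_gb - slice_gb)    # 6 x 700 GB = 4200 GB

    print(f"RAID 10 scratch volume: {raid10_scratch} GB")
    print(f"RAID 6 bulk volume:     {raid6_bulk} GB")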

user3490