55

I have always used hardware-based RAID because it (IMHO) sits at the right level (feel free to dispute this), and because, in my experience, OS failures are more common than hardware failures. With software RAID, if the OS fails, the RAID is gone and so is the data; with hardware RAID, the data survives regardless of the OS.

However, on a recent Stack Overflow podcast they said they would not use hardware RAID, because software RAID is better developed and thus performs better.

So my question is: are there any reasons to choose one over the other?

masegaloeh
  • 17,978
  • 9
  • 56
  • 104
Robert MacLean
  • 2,186
  • 5
  • 28
  • 44

10 Answers

42

I prefer software RAID.

Software RAID has the big advantage of not being tied to a particular set of hardware. For example, I've had controller and/or mainboard failures that resulted in the loss of the array.

Today's CPUs are plenty fast enough to handle parity on RAID-5 variants. I've also never had any issue with bus saturation from multiple concurrent reads.
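On Linux, that hardware independence shows up in how the md subsystem works: the array metadata lives on the member disks themselves, not on a controller. A minimal sketch of creating a RAID-5 array with mdadm (the device names /dev/sdb through /dev/sdd are assumptions; adjust for your system, and note these commands are destructive and need root):

```shell
# Create a 3-disk RAID-5 array; mdadm writes an md superblock to each member,
# so the array can later be reassembled on any machine with mdadm installed.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the initial parity build progress.
cat /proc/mdstat

# Optionally record the layout so the array is assembled by name at boot;
# mdadm can also auto-detect members by scanning their superblocks.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```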

Jason Weathered
  • 750
  • 1
  • 6
  • 8
  • 5
    +1 - I had a situation once where a raid controller died and we had to source a new raid controller before we could get the server back online :( – Shard Apr 30 '09 at 22:36
  • +1 Jeff mentioned in a recent S.O. podcast that he's joined the camp of Linux-style software RAID (i.e. the OS emulates a virtual disk based on a series of physical ones—a virtual disk which is bootable and usable as the root filesystem). – jhs May 05 '09 at 06:01
  • 7
    In 16 years of server administration, the only RAID controllers I have had fail were controllers that were "cheap"...or not from HP or Dell, with one exception. I had a Compaq ProLiant server that was 8 years old...and its Smart 2H finally died one night when the AC unit next to it bought the farm and the server overheated. Hardware RAID takes a lot of the guesswork out at 3 in the morning. – Thomas Denton May 19 '09 at 03:14
  • 1
    I'm struggling with an HP SmartArray Controller that marks brand new drives as failed without reason. The System is more or less evacuated because of this situation. Paid support denies possibility of controller failure and has replaced every single part (mainboard, too) except that controller within the past 3 months. HP can go wrong, too. – korkman Apr 17 '10 at 14:14
  • +1 it's easier to find a compatible OS/software than it is to find another compatible RAID card when the shit hits the fan... however, issues with software RAID include identifying faulty disks and having perfectly functional hot-swapped rebuild - which isn't easy even today regardless of OS – Oskar Duveborn May 11 '10 at 11:36
25

I prefer HW RAID, because if you have to pull good disks out of a dead machine you're not limited to the OS configuration of the RAID array.

You do keep backups of your RAID controllers config, don't you?

So just load that up on a donor machine, slot in the drives (in the right order! You did label your drives before you pulled them, right?), restart on a clean OS, and your data is recovered.

THE OS DRIVES ARE NOT THE IMPORTANT ONES TO KEEP. THE DATA DRIVES ARE!

(You do backup your DATA drives, right?)

Mark Henderson
  • 68,316
  • 31
  • 175
  • 255
Guy
  • 2,658
  • 2
  • 20
  • 24
  • 18
    Linux software raid has this information in the superblock. You don't even need to connect the drives in the same order; it should automatically detect that the drives were part of an array and which drives were in which slots. – Captain Segfault Aug 01 '09 at 01:22
  • 1
    @Guy, I've never had this work for me, ever... you must have magic fingers – The Unix Janitor Apr 16 '10 at 20:11
  • 1
    I've had that work on a Dell server and an IBM server, though it was pretty hairy. I've also had it not work. On the software side, any time I've moved Linux RAID disks (provided they weren't already messed up on the old machine) it worked fine. – Bill Weiss May 11 '10 at 13:01
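The behaviour Captain Segfault describes can be checked with mdadm directly: each member's superblock records the array UUID and the member's slot, so drive order and cabling don't matter on the new machine. A sketch, assuming the moved members show up as /dev/sde and /dev/sdf (these names are hypothetical; run as root):

```shell
# Inspect the md superblock on each moved disk: shows the array UUID,
# RAID level, and which slot each member occupied in the original array.
mdadm --examine /dev/sde /dev/sdf

# Let mdadm scan all block devices and reassemble any arrays it recognises,
# regardless of the order the disks were connected in.
mdadm --assemble --scan
```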
12

An important consideration is reliability; in the end, both the hardware RAID and the software RAID are just software implementations of the algorithm. Therefore both are susceptible to bugs in software.

After many years of running software RAID setups in Linux I've never run into a bug that caused data loss. But I've seen several cases of complete data loss in a very expensive hardware RAID from a reputable manufacturer.

Two lessons to learn from this:

  • RAID is not a backup strategy.
  • Just because it's in hardware is no guarantee that it works correctly.
Jared Oberhaus
  • 596
  • 6
  • 14
  • 4
    +1 for RAID is not a backup strategy. – Thomas Denton May 19 '09 at 03:18
  • +1 for "Just because it's in hardware is no guarantee that it works correctly". One of the major international hardware vendors had one of our $10,000 servers back at their headquarters for over a month and they couldn't get the hardware RAID to work properly, no matter how many replacement parts they put in it, even after wiping everything. That server was shipped back to the factory/engineers with a WTF note attached for analysis. They had to send us a completely new upgraded server as a replacement (due to our model not being sold anymore). – BeowulfNode42 Dec 15 '20 at 02:20
9

Hardware RAID controllers usually come with a battery-backed RAM cache, which speeds up write operations even when using software RAID. So if I can, I always try to get a hardware RAID controller with a battery-backed cache, and then run software RAID on top of it if the controller firmware isn't up to the task.

dpavlin
  • 274
  • 2
  • 5
  • 3
    This is my preferred way, too. Use BBWC plus software RAID. Linux Software RAID 10 actually outperformed HW RAID 10 in a recent 10 disk setup of mine. A further advantage of Software RAID is being able to create multiple RAIDs on same disks with partitions. – korkman Apr 17 '10 at 14:29
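korkman's point about multiple RAIDs on the same disks follows from md working on partitions rather than whole drives. A sketch of one such layout on two disks (partition names and the chosen levels are assumptions, and the commands are destructive and need root):

```shell
# Two disks, each split into two partitions beforehand (e.g. with fdisk):
#   sda1 + sdb1 -> small RAID-1 mirror for the OS
#   sda2 + sdb2 -> RAID-0 stripe for scratch data that needs speed, not redundancy
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
```

This mixing of RAID levels on the same physical spindles is something most hardware controllers either cannot do or make awkward.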
8

I think Jeff's experience with his RAID arrays comes down to getting (and relying on) the cheapest RAID controllers available. "Wow, this RAID array does a billion gigaflops a second and I got it for £10 on eBay!"

If you value your data - get a good, proven, reliable RAID controller.

Even better, get two (with failover).

Even better, get with the 21st century and get a dedicated external FC / iSCSI connected disk array with built-in fault tolerance and zero single points of failure (ZSPOF): dual path, dual RAID, RAID 6 or 10 (or 20 or 50), and hot spares.

Yes, it's expensive. But how expensive would it be if the entire SO site were trashed?

Guy
  • 2,658
  • 2
  • 20
  • 24
5

Software RAID has failed to do its job for me on a number of occasions; hardware RAID never has. That said, cheap hardware RAID is worse than good software RAID, so spend a few £$€ to get good controllers.

Chopper3
  • 100,240
  • 9
  • 106
  • 238
5

It depends. For simple mirroring scenarios, I prefer software RAID, because as Jason W said, you can always remove one of the drives and stick it in another machine.

For other scenarios (RAID 0, RAID 5, or RAID 10), a single drive isn't much use on its own anyway, so I prefer hardware RAID.

Regardless (and I say this with all due respect and love, guys), you shouldn't make your decisions based on what Stack Overflow -- a group of software guys -- has or hasn't done.

Portman
  • 5,263
  • 4
  • 27
  • 31
  • PW is right... "software guys" :) Really, most of the time it comes down to an economic decision: how much redundancy can your risk budget and dollar budget afford? – Thomas Denton May 19 '09 at 03:17
4

Software RAID is dependent on the OS. Hardware RAID is dependent on the card and the OS driver.

That is what it comes down to. It is very easy to get a replacement OS: just reinstall. A replacement RAID card, especially after a few years, can be impossible to find.

Some RAID cards will hide the whole array from the OS, but the driver will still know that it is RAID. The best cards handle all the low-level work, such as writing across disks and computing parity, whereas the worst make the OS do everything.

The cheaper cards have a huge tendency to mess up the parity calculations and get them wrong. Imagine a few TB of data looking fine until you try to open it. Nightmare.

3ware cards are expensive but useless. Their throughput is really bad under high load on Windows, and they will pretty much lock up a system on Linux if you enable NFS. Dell PERC cards (versions 5 and 6) are great, however. The earlier ones cheated a bit on RAID 10.

Ryaner
  • 3,027
  • 5
  • 24
  • 32
3

I've seen hardware RAID cards fail and take out an entire array. You are definitely adding another single point of failure with a hardware card, unless you're in a redundant configuration.

You should be aware that there's "hardware RAID" and then there's HARDWARE RAID. Google "fakeraid" for more info. Some "hardware RAID" cards actually do very little RAID processing on the card itself, and use custom drivers to do the RAID calculations in software on the system's regular processor. This can lead to strange results. I had one of these systems (a Windows 2003 server) start showing separate C and D drives, instead of one C drive, because something got confused somewhere. That should never be possible with true hardware RAID, as it appears as one physical drive to the system.

I have very little experience with software RAID. I've been prejudiced strongly against it in the past, but am now moving towards using it, based on things I've heard here and elsewhere. I'd consider testing it for a future deployment.

On the other hand, I've moved away from any kind of in-server RAID to external RAID systems. Almost all my servers have zero drives installed. I'm in love with Xiotech systems, but other types of external RAIDs have also served me well. I've never (knocking on wood) lost data from one yet.

Schof
  • 962
  • 1
  • 6
  • 10
  • 2
    soft RAID in Linux has been "tested good" for a number of years and adds very little overhead, while being very quick. Recommended if you are running Linux servers. – Avery Payne May 29 '09 at 20:44
3

The answer is quite different on Linux/Unix and Windows. S/W RAID on Linux is much better than Windows S/W RAID, which is limited in its support for different layouts and very slow (at least on Windows Server 2003). On Windows you are much better off with H/W RAID in just about every case.

Software RAID on Linux and Unix works much better than it does on Windows. This makes S/W RAID a reasonable choice on these platforms, although on a larger installation you will probably be better off with H/W RAID or a SAN.