1

I am choosing between two similar server models, one of which has software RAID while the other has hardware RAID.

The servers in question are the SYS-E32-1 and SYS-E32-3 from So you Start (an OVH brand). They have similar configurations, the biggest differences being the absence/presence of Hyper-Threading and hardware RAID.

There are two hard drives in both models, which I'll use as either RAID1 or RAID0 under LVM. I am going to run CentOS 6 and other guest OSes on a CentOS 6 host with KVM virtualization. Our normal load is typical web services.
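For reference, a setup like the one described — software RAID (`mdraid`) with LVM on top — might be built roughly as follows. This is a minimal sketch; the device names (`/dev/sda2`, `/dev/sdb2`), volume group name, and sizes are examples, not anything from the actual servers:

```shell
# Assemble the two drives' partitions into a RAID1 (mirror) md device.
# Use --level=0 instead for RAID0 (striping, no redundancy).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put LVM on top of the md device.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n guests vg0   # logical volume for KVM guest images
```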

How much should the performance differ for me? Is hardware RAID worth it or more like a rounding error in this scenario?

Clarification: We are not considering "fake RAID" here; only pure software and hardware RAID.

Nickolai Leschov
  • So will this be an ESX host? Here is the issue that I have run into: Many host operating systems do not support software-based RAID at all because—and here is the catch—software-based RAID has some dependency on proprietary hardware on the motherboard. Meaning until the manufacturer open sources their tech internals, you cannot use open source software RAID on a setup like that. Which means the best bet is always a hardware RAID card. It will always be supported by Linux kernels, so no headaches. – Giacomo1968 May 03 '14 at 19:07
  • @JakeGould I would be running KVM, not ESX. Maybe you are talking about "[fake RAID](http://serverfault.com/questions/9244/how-do-i-differentiate-fake-raid-from-real-raid)"? I cannot see how software RAID would be dependent on proprietary hardware: at least in Linux there seems to be this Linux thing - `mdraid` - that doesn't require any special hardware and works well. I was advised to use that rather than _fake_ RAID, which would _pretend_ to be hardware, but let the CPU do all the work anyway. I am concerned whether real hardware RAID will offer noticeable benefits in a 2-disk RAID0/1 system – Nickolai Leschov May 03 '14 at 19:44
  • Ahhh, okay. So if it is pure software RAID it is indeed all a performance question. So fair enough. – Giacomo1968 May 03 '14 at 19:47
  • Run your application on both and measure what happens. – user9517 May 03 '14 at 23:06
  • 1
    possible duplicate of [Can you help me with my capacity planning?](http://serverfault.com/questions/384686/can-you-help-me-with-my-capacity-planning) – user9517 May 03 '14 at 23:06
  • @Iain I cannot afford to rent servers just to do a test; there are long-term contracts or setup fees involved. I fail to see how my question may already have an answer there: there's nothing about RAID in the linked question, and my question is very specifically about RAID. If you voted down my question, please vote it back up: it's a specific and well-researched question. – Nickolai Leschov May 03 '14 at 23:25
  • You fail to see because you don't understand. Without knowing your workload we can't really advise. – user9517 May 03 '14 at 23:30

2 Answers

4

If you have hardware RAID then you'll most probably have a controller with onboard cache. If that controller has a BBU (a battery that preserves the contents of the cache on power loss), the performance difference will be huge.

A BBU-backed cache will speed up most of your workload considerably. That's because all disk syncs/fsyncs become nearly instant: the data only has to reach the controller's cache, not the platters. Writes for databases, syslog, the filesystem journal, etc. will be much, much faster, and an fsync won't force the whole write cache to be flushed to disk.
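You can get a rough feel for this effect yourself with a quick synchronous-write test. The sketch below uses `dd` with `oflag=dsync`, which forces every 4 KiB write to be durable before the next one is issued; on a BBU-backed controller cache the writes complete once they hit controller RAM, while without one each write waits on the disk itself (the filename is arbitrary):

```shell
# 256 synchronous 4 KiB writes; compare the reported throughput
# on the two machines. Run it on the filesystem you care about.
dd if=/dev/zero of=synctest.bin bs=4k count=256 oflag=dsync
rm -f synctest.bin
```

This is only a crude probe, not a proper benchmark — a tool like `fio` would give more controlled numbers — but the difference between cached and uncached sync writes is usually obvious even at this level.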

Having said that, the benefit above comes from the BBU-backed cache itself, not from where the RAID logic runs. So performance should be equally good if you implement software RAID on top of a hardware RAID controller — i.e., if you decide to do SW RAID while letting the HW controller provide the caching.

As for their other differences, HW RAID is usually hassle-free and usually supports hot-swapping. With SW RAID you'll have to test that yourself, as the underlying hardware may have issues.
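With `mdraid`, replacing a drive is a manual (but scriptable) procedure. A rough sketch, assuming a mirror at `/dev/md0` with example member `/dev/sdb2` — the device names are hypothetical:

```shell
# Mark the failing member as failed and pull it out of the array.
mdadm --manage /dev/md0 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb2

# ...physically swap the drive (if the hardware allows hot-swap),
# recreate the same partition layout on the new disk, then re-add it:
mdadm --manage /dev/md0 --add /dev/sdb2

# Watch the rebuild progress.
cat /proc/mdstat
```

Whether the physical swap itself can happen without a reboot depends on the SATA/SAS controller and backplane, which is the point made above.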

AFAIK (and I'm propagating some rumors here) HW RAID manufacturers invest a lot in this, taking into account disk manufacturers and even specific disk models, and implementing a number of workarounds for disk issues, both for performance and for reliability. On the other hand, SW RAID keeps improving with kernel upgrades, while it has been said a number of times that HW RAID firmware quality is really bad.

Finally, HW RAID usually comes with some logging, so you can check the controller's logs to see whether a disk is misbehaving. With SW RAID you can simply run dmesg or look at syslog instead.
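For SW RAID, the usual health checks look something like this (a sketch; `/dev/md0` is an example device, and the `--mail` address is a placeholder):

```shell
# Quick overview of all md arrays and any ongoing resync.
cat /proc/mdstat

# Detailed state of one array: member status, failed/spare devices, events.
mdadm --detail /dev/md0

# Run the mdadm monitor in the background and get mailed on failures.
mdadm --monitor --scan --daemonise --mail=root@localhost
```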

V13
3

In theory there should be no performance difference between hardware and software RAID, but in practice most hardware RAID controllers have a dedicated cache, which makes them significantly faster in specific circumstances.

If the hardware controller does not have a dedicated cache, then performance should be essentially the same — but only for RAID 0, 1, 10, and 0+1, since those levels involve no parity calculation and therefore require almost no processing.

TechAUmNu