
I've been reading a lot about RAID controllers/setups, and one thing that comes up often is that hardware controllers without cache offer the same performance as software RAID. Is this really the case?

I always thought that hardware RAID cards would offer better performance even without cache. I mean, you have dedicated hardware to perform the tasks. If that is the case, what is the benefit of getting a RAID card that has no cache, something like an LSI 9341-4i, which isn't exactly cheap?

Also, if a performance gain is only possible with cache, is there a cache configuration that writes to disk right away but keeps data in cache for read operations, making a BBU not a priority?

Peter Mortensen
ItsJustMe
  • Something that I have noticed that favors HW RAID: in my experience, if you're running SW RAID and the system does anything other than a clean shutdown, you'll fault the array and have to rebuild. HW RAID doesn't fault if it wasn't writing when the system went down. – Loren Pechtel Apr 25 '15 at 22:15

6 Answers


In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice.

Long answer: when computing power was limited, hardware RAID cards had the significant advantage of offloading parity/syndrome calculation for RAID schemes involving it (RAID 3/4/5, RAID6, etc.).

However, with ever-increasing CPU performance, this advantage basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance of over 3 GB/s per single execution core.
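
For reference, on Linux you can read the kernel's own measurements of these speeds, since the xor and raid6 modules benchmark the available implementations when they load. A quick check, assuming the md/raid6 modules are loaded (exact log wording varies by kernel version):

```
# Kernel-measured XOR and RAID-6 syndrome speeds (values are per core)
dmesg | grep -iE 'xor|raid6'
# Expect lines similar to:
#   xor: using function: avx (... MB/sec)
#   raid6: avx2x4 gen() ... MB/s
```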

The advantage that hardware RAID maintains today is the presence of a power-loss-protected DRAM cache, in the form of a BBU or NVRAM. This protected cache gives very low latency for random writes (and for reads that hit it) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless. Moreover, some low-end RAID controllers not only come without a cache, but also forcibly disable the disks' private DRAM cache, leading to slower performance than with no RAID card at all. Examples are Dell's PERC H200 and H300 cards: they totally disable the disk's private cache and (if newer firmware has not changed that) actively forbid re-activating it. Do yourself a favor and do not ever buy such controllers. While even higher-end controllers often disable the disk's private cache, they at least have their own protected cache - making the HDDs' (but not SSDs'!) private cache somewhat redundant.
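
As an illustration, when a disk is visible to the OS directly (e.g., behind a plain HBA or in passthrough mode), you can check whether its volatile write cache is enabled. This is only a sketch, assuming a disk at /dev/sda; controller-attached disks may not respond to these commands at all:

```
# SATA: query (and, if needed, re-enable) the drive's volatile write cache
hdparm -W /dev/sda        # show current write-caching state
hdparm -W1 /dev/sda       # enable write caching (-W0 disables it)

# SAS: the equivalent check via the Write Cache Enable (WCE) mode-page bit
sdparm --get=WCE /dev/sda
```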

This is not the end, though. Even capable controllers (those with a BBU or NVRAM cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient flash page programming/erasing. And while some (most?) controllers let you re-enable the disk's private cache (e.g., PERC H700/710/710P), if that private cache is volatile you risk losing data in case of power loss. The exact behavior really is controller- and firmware-dependent (e.g., on a Dell S6/i with a 256 MB WB cache and the disks' caches enabled, I had no losses during multiple planned power-loss tests), which creates uncertainty and much concern.

Open-source software RAID, on the other hand, is a much more controllable beast - its software is not enclosed inside proprietary firmware, and it has well-defined metadata formats and behaviors. Software RAID makes the (correct) assumption that the disk's private DRAM cache is not protected, but at the same time that it is critical for acceptable performance - so rather than disabling it, it uses ATA FLUSH / FUA commands to write critical data to stable storage. As it often runs from the SATA ports attached to the chipset southbridge, bandwidth is very good and driver support is excellent.

However, if used with mechanical HDDs, synchronized random write access patterns (e.g., databases, virtual machines) will suffer greatly compared to a hardware RAID controller with a WB cache. On the other hand, when used with enterprise SSDs (i.e., with a power-loss-protected write cache), software RAID often excels and gives results even better than hardware RAID cards. Unfortunately, consumer SSDs only have a volatile write cache, delivering very low IOPS on synchronized write workloads (albeit being very fast at reads and async writes).

Also consider that software RAIDs are not all created equal. Windows software RAID has a bad reputation, performance-wise, and even Storage Spaces does not seem much different. Linux MD RAID is exceptionally fast and versatile, but the Linux I/O stack is composed of multiple independent pieces that you need to understand carefully to extract maximum performance. ZFS parity RAID (RAIDZ) is extremely advanced but, if not correctly configured, can give you very poor IOPS; mirroring+striping, on the other hand, performs quite well. Anyway, it needs a fast SLOG device for synchronous write handling (the ZIL).
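
As a rough sketch of the "mirroring+striping plus fast SLOG" layout mentioned above (pool name and device names are placeholders; adapt them to your system):

```
# Striped ZFS mirrors (RAID10-like) with a dedicated SLOG device for the ZIL
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    log /dev/nvme0n1   # ideally a low-latency, power-loss-protected SSD

zpool status tank      # verify the vdev layout
```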

Bottom line:

  1. if your workloads are not sensitive to synchronized random writes, you don't need a RAID card
  2. if you need a RAID card, do not buy a RAID controller without a WB cache
  3. if you plan to use SSDs, software RAID is preferred, but keep in mind that for heavy synchronized random writes you need power-loss-protected SSDs (i.e., Intel S/P/DC series, Samsung PM/SM series, etc.). For pure performance the best choice probably is Linux MD RAID, but nowadays I generally use striped ZFS mirrors. If you cannot afford losing half the space to mirrors and you need ZFS's advanced features, go with RAIDZ, but carefully think about your vdev setup.
  4. if, even when using SSDs, you really need a hardware RAID card, use SSDs with power-loss-protected write caches.
  5. if you need RAID6 with normal mechanical HDDs, consider buying a fast RAID card with 512 MB (or more) of WB cache. RAID6 has a high write performance penalty, and a properly sized WB cache can at least provide fast intermediate storage for small synchronous writes (e.g., the filesystem journal).
  6. if you need RAID6 with HDDs but can't / don't want to buy a hardware RAID card, carefully think about your software RAID setup. For example, a possible solution with Linux MD RAID is to use two arrays: a small RAID10 array for journal writes / DB logs, and a RAID6 array for raw storage (as a fileserver) - see the sketch after this list. On the other hand, software RAID5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSD setup.
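
A minimal sketch of the two-array layout from point 6, assuming each HDD carries a small first partition (for the RAID10) and a large second partition (for the RAID6); device and partition names are placeholders:

```
# Small RAID10 for filesystem journals / DB logs (first, small partitions)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]1

# Large RAID6 for bulk storage (second, large partitions)
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[a-f]2

cat /proc/mdstat   # check both arrays
```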
shodanshok
  • Many thanks for the great explanations, I had no idea RAID cards disabled the cache on the HDDs. This isn't the type of server that warrants the near $800+ investment, so I'll be reading up on software RAID setups a bit more and probably go with that. – ItsJustMe Apr 24 '15 at 14:18
  • @ItsJustMe: some applications also disable disk write caching, if the drive supports Force Unit Access. Microsoft Active Directory, Exchange come to mind. – Greg Askew Apr 24 '15 at 17:11
  • The OP is talking about a hypervisor. RAID5 should be out of the question, and write cache is going to be a must. – ewwhite Apr 24 '15 at 19:02
  • I wish I could upvote more than once. As a sysadmin, I'd always assumed hardware RAID would outperform software RAID, though I've tried to avoid the H300 cards after hearing some other people's horror stories with them. This will definitely give me good food for thought and help better inform future purchase decisions. Thanks! – user24313 Apr 24 '15 at 20:35
  • Some RAID cards allow SSDs to be used as fast caches (the latest Dell PERCs come to mind); for software RAID and for cards that don't allow this, there's always Linux bcache. – bayindirh Apr 25 '15 at 23:02
  • In reality, even in 2016, a 6-drive software RAID 5/6 writes at < 25 MB/s while a proper hardware RAID card from 2010 writes at > 500 MB/s. This is on both Intel RSTe and Windows Storage Spaces. I just don't understand what the bottleneck is on a modern CPU. – Monstieur Mar 15 '16 at 05:29
  • The problem with software RAID 5/6 is that writes often trigger a read-modify-write, which in turn slows down the disks considerably. A BBU-enabled hardware RAID controller can coalesce multiple writes in a single disk access/transaction, greatly improving performance. – shodanshok Mar 15 '16 at 09:28
  • _"[random reads] when used with SSDs, they often excel"_ - If the SSD isn't an enterprise SSD (usually that means it doesn't have a capacitor for power-loss protection) and doesn't lie, then even SSDs can have extremely low IOPS for operations like sequential `fsync()`. [See this article](https://www.percona.com/blog/2018/02/08/fsync-performance-storage-devices/), which shows a Samsung NVMe SSD without a capacitor doing only ~250 fsyncs per second (I too have measured this). SSDs with a capacitor give ~30x more fsyncs/s, a hardware RAID controller with battery 100x more. – nh2 Jun 06 '18 at 02:05
  • @nh2 surely a powerloss protected cache is critical for high sync write performance. I updated the answer with that note; thanks. – shodanshok Aug 30 '19 at 11:24
  • Possible scenario that could be added to your answer: if you need sequential writes on software RAID backed by HDDs, consider using LVM on top of the RAID layer, and introducing a pair of small SSDs in a RAID1, using that storage as a "writeback" LVM cache on top of the HDDs' LVM volume. This allows the OS to quickly return success for writes once the SSDs return success, and the LVM layer can push the writes to the HDDs using spare IOPS sometime later. (Mirroring the SSDs is required because the loss of the cache will corrupt the underlying volume if there is unwritten data in the cache.) – cdhowie Sep 05 '19 at 03:33
  • I have not checked @cdhowie's approach with LVM, but I am successfully using a similar approach using ext4's _external journal_ feature, with a RAID1ed SSD as journal on top of a RAID1ed HDD; that means that any write (like `fsync()`) returns extremely quickly. – nh2 Sep 05 '19 at 11:12

You'll want a battery or flash-backed cache solution for any hardware controller you purchase. Most regret not doing so.

But to answer your question, most controllers have configurable cache ratios... so a 100% read cache and 0% write cache negates the need for BBU protection. Your write performance will just suck.
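
The exact syntax is vendor-specific; as a hedged example, on an LSI/Broadcom controller managed with storcli, a "read cache only, no write-back" policy would look roughly like this (controller/volume numbers are placeholders):

```
# Show the current cache policy of virtual drive 0 on controller 0
storcli /c0/v0 show all

# Keep read-ahead caching, but force write-through (no write-back cache)
storcli /c0/v0 set rdcache=ra
storcli /c0/v0 set wrcache=wt
```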

I can't address your software RAID question because it depends. Linux MD RAID is different than Windows Software RAID, which is different than something like ZFS. Solutions like ZFS can perform better than hardware because they leverage the server's RAM and CPU resources.

ewwhite
  • By "write performance will just suck" you mean that'll be about the same as Software RAID or Hardware RAID without cache? Or is there a penalty to write performance beyond that if the card is dedicating the cache to reading? – ItsJustMe Apr 24 '15 at 11:26
  • It depends on what you're doing. If you don't have a write intensive application, then the performance hit may not be a problem. – ewwhite Apr 24 '15 at 11:34
  • It's a Proxmox host with Windows VMs used for mail and web hosting. There's not much database usage, but the e-mail service probably does have lots of write activity. Currently I'm just debating whether having a read-only cache card is worth it over software RAID. – ItsJustMe Apr 24 '15 at 11:37
  • Use a Flash-backed RAID controller for virtualization. – ewwhite Apr 24 '15 at 11:38
  • We ran a Cyrus mail server with about 4000 accounts on it using software RAID. Active accounts hitting it on any given day were more like 300 to 600. Performance was noticeably worse than our primary Cyrus mail server with hardware RAID and a BBU. The BBU and RAID controller cache give data assurance, but they also give performance. This is because once the data arrives at the controller, it can tell the OS the write is complete. Otherwise it would have to wait for the hard drive to signal the write is complete. This saves significant clock cycles. Moved to hardware RAID and solved. – labradort May 14 '15 at 17:56

The RAID controller you have your eye on is a cheap one and is basically fakeraid. It even depends on your mainboard to provide some functions, like memory, and not many mainboards support this, which means you can't load the driver.

About HW- vs. SW-RAID itself: I'm not using HW-RAID anymore unless it is a box with an EMC logo on it, for example. For everything else I switched back to SW-RAID many moons ago, for a few very simple reasons.

  1. You need additional hardware and need to match it. You also need to match the firmware and keep it in sync. A lot of disks will not work correctly, and you will see spikes in your I/O latency for no clear reason.

  2. Additional hardware is expensive, so for a small solution you can put that additional $1000 (a decent controller costs as much as two or three disks) to better use. Invest it in more disks and standard controllers, ECC memory, or a faster CPU. And maybe an on-site spare disk, if you plan to run it for longer than the warranty period or don't want to pay the express fees for overnight shipping.

  3. Upgrading is a pain, as you need to keep track of OS patches and firmware for both disks and controller. It may result in a situation where upgrading/updating isn't possible anymore.

  4. On-disk formats. Enough vendors use an in-house layout to store data that is tied to a revision of your hardware and firmware combination. This may result in a situation where a replacement part makes it impossible for you to access your data.

  5. It is a SPOF and a bottleneck. Having only one controller behind only one PCI bridge doesn't give you the performance and redundancy you really need. With this also comes the fact that no migration path exists to move data to another disk set outside the controller's reach.

Most of these points have been taken care of with newer generations of SW-RAID software or solutions like ZFS and Btrfs. Keep in mind that in the end you want to protect your data, not fast-accessible but redundant garbage.

hspaans
  • I disagree. Many people are happy with Dell, HP, IBM and higher-end LSI RAID controllers. But honestly, most modern quality servers already have onboard RAID solutions, so the idea of shopping for an individual controller is a bit dated. Software RAID solutions also need to account for low-latency write workloads. ZFS has a ZIL, but many other software RAID implementations are lacking on that front. – ewwhite Apr 24 '15 at 12:30
  • I would also differ with your last paragraph: RAID is availability, not protection. Protection requires backups, not RAID. – Rowan Hawkins Jan 04 '18 at 23:35
  • @ewwhite You all speak about Linux SW-RAID. Do any of you have feedback on vendor-specific SW-RAID, like HPE's Smart Array S100i SR Gen10 Software RAID? Is this something you want to rely on when it comes to your data? As far as I can see, it is supported on Hyper-V Server 2019, which is my area of interest. For Linux there are some packages that I am not sure are top quality, but my hypervisor will not be based on Linux anyway... https://h20195.www2.hpe.com/v2/gethtml.aspx?docname=a00019427enw – NoOne Mar 15 '20 at 19:17

I have spent the last year (off and on through 2014-2015) testing several parallel CentOS 6.6 RAID 1 (mirrored) configurations using 2 LSI 9300 HBAs versus 2 LSI 9361-8i RAID controllers, with systems built on the following: a 2U Supermicro CSE-826BAC4-R920LPB chassis, an ASUS Z9PE-D16 motherboard, 2 Intel Xeon E5-2687W v2 eight-core 3.4 GHz processors, mirrored Seagate ST6000NM0014 6 TB SAS 12 Gb/s drives, and 512 GB of RAM. Note this is a fully SAS3 (12 Gb/s) compliant configuration.

I have scoured through articles written about tuning software RAID and I have used Linux software RAID for over 10 years. When running basic I/O tests (dd with oflag=direct on 5 KB to 100 GB files, hdparm -t, etc.), software RAID seems to stack up favorably against hardware RAID. The software RAID was mirrored through separate HBAs. I have gone as far as doing tests with the standard CentOS 6 kernel and the kernel-lt and kernel-ml configurations. I have also tried various mdadm, file system, disk subsystem, and OS tunings suggested by a variety of online articles written about Linux software RAID. Despite tuning, testing, tuning and testing, when running a real-world transaction processing system (with a MySQL or Oracle database), I have found that running a hardware RAID controller results in a 50-fold increase in performance. I attribute this to the hardware RAID's optimized cache control.
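
For context, basic I/O tests of the kind mentioned above look roughly like this; paths, sizes and the md device name are placeholders, not the exact commands used:

```
# Sequential write, bypassing the page cache (vary bs/count from ~5 KB up to ~100 GB)
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=10240 oflag=direct

# Raw read throughput of the array device
hdparm -t /dev/md0
```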

For many, many months I was unconvinced that hardware RAID could be so much better; however, after exhaustive research into Linux software RAID, testing, and tuning, those were my results.

Brent

Most of the writers here are just ignorant of the "write hole". This is the basis for the calls for battery backup units on hardware RAID versus the absence of such for software RAID. Well, for example, the Linux software RAID implementation either supports write-intent bitmaps or does a full parity recalculation after an unclean shutdown. ZFS always strives for full-stripe writes to avoid this inconsistency, or postpones its re-checking. So, in summary, smart-enough software RAID nowadays is often good enough to be used instead of "who knows what's inside" so-called "hardware RAID".
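
For reference, this is roughly how a write-intent bitmap is enabled on an existing MD array, so that only dirty regions are resynced after an unclean shutdown (a sketch; /dev/md0 is a placeholder):

```
# Add an internal write-intent bitmap to an existing array
mdadm --grow --bitmap=internal /dev/md0

# Confirm the bitmap is active
mdadm --detail /dev/md0 | grep -i bitmap
cat /proc/mdstat
```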

As to the cache part of the question, it really doesn't matter so much, because the OS's own write cache can be much bigger than the one the "hardware" adapter has.

poige
  • This is another reason to avoid hardware RAID cards without a proper protected WB cache. A note on Linux MD RAID: it is not totally immune to the write hole. As it has no power-loss protection, in the event of a sudden power loss data will eventually be lost (think of in-transit data and/or partial writes). Sure, this will happen even in a single-disk scenario, but the parity nature of RAID5/6 amplifies it. In the worst scenario critical filesystem metadata can be corrupted; however, modern filesystems are resilient enough to recover quite nicely. Some data can be lost, though. – shodanshok Apr 24 '15 at 21:51
  • @shodanshok, you're totally wrong. Think – poige Apr 25 '15 at 09:16
  • I'm wrong on what, precisely? Linux mdraid remains somewhat vulnerable to write-hole. Please read the linux-raid mailing list for further details. – shodanshok Apr 25 '15 at 11:55
  • The write hole, in **regard to RAIDs with parity**, means **a parity mismatch due to incomplete write operations occurring**. That's why it uses either a bitmap-optimized or a whole-array parity recalculation after unclean shutdowns. – poige Apr 25 '15 at 14:14
  • http://www.raid5faq.com/reliability.aspx : „… What is a RAID5 write hole? RAID5 write hole is a condition of a RAID5 array when some of the parity blocks do not match the corresponding data. Most often a write hole occurs when a power failure happens during the write operation. In this case a controller has time to write data only but not parity. To avoid accumulating write holes, one should use an UPS (Uninterruptible Power Supply) or BBU (Battery Backup Unit). Additionally, periodical array rebuilds correct accumulated inconsistencies. …” – poige Apr 25 '15 at 14:18
  • https://blogs.oracle.com/bonwick/entry/raid_z : „… RAID-5 (and other data/parity schemes such as RAID-4, RAID-6, even-odd, and Row Diagonal Parity) never quite delivered on the RAID promise -- and can't -- due to a fatal flaw known as the RAID-5 write hole. Whenever you update the data in a RAID stripe you must also update the parity, so that all disks XOR to zero -- it's that equation that allows you to reconstruct data when a disk fails. The problem is that there's no way to update two or more disks atomically, so RAID stripes can become damaged during a crash or power outage. …” – poige Apr 25 '15 at 14:19
  • it seems you misunderstand me... 1) I 100% agree that a power-loss protected WB cache _prevents_ write holes. I also wrote that in my first reply; 2) I 100% agree that ZFS is immune to write holes. However, this and other design decisions have the net effect of lowering total achievable per-vdev IOPS. This is why I suggest fast devices for ZFS (or at least a fast ZIL device). – shodanshok Apr 25 '15 at 16:27
  • 3) About Linux RAID: while it is true that, after a power loss, it can heal itself and bring the array into a consistent state, this does not guarantee 100% data integrity / availability. Even if the array is in a consistent state, some (very small) data chunks can be lost (albeit with small probability). This is not due to deficiencies in Linux RAID itself - it is a mere consequence of a power loss without a protected cache. One possible solution is to have a RAID-journal device, which was discussed in various mailing list threads. For example: http://marc.info/?l=linux-raid&m=142792520411231&w=2 – shodanshok Apr 25 '15 at 16:36
  • No, you got me totally wrong. What a battery does is only increase the time window during which data can reach "permanent storage". It doesn't guarantee any integrity by itself. Not a silver bullet at all. – poige Apr 26 '15 at 02:54
  • Many batteries, if maintained in good state, can power the WB cache for 24-96 hours, which is plenty of time to restore power except in _really_ extreme situations. Moreover, modern controllers switched to NVRAM (read: flash) memory as long-term storage, so in case of a power failure a small battery / supercap will flush the cache content to NV memory that can retain data _for months or years_. In other words, a BBU RAID controller _will_ prevent RAID5/6 holes in (almost) all circumstances. – shodanshok Apr 26 '15 at 06:36
  • Even for years — it doesn't matter. The only thing which matters is whether the cache has the complete transaction inside or not. As I told you — think. – poige Apr 26 '15 at 13:25
  • Sorry, but this is nonsense. I invited you many times to read about the argument in the linux-raid mailing list, so I am not going to reiterate. Good afternoon. – shodanshok Apr 26 '15 at 14:41
  • Ah, nonsense, ah. One question for a brainiac: how does the cache distinguish between data and metadata? Good luck. – poige Apr 26 '15 at 15:11
  • The RAID card itself is a small computer - a state machine. When power disappears, its state is preserved by means of a battery or supercap + NVRAM. When power is provided, the old state is reloaded and the state machine restarts from the very same point. In-memory data are flushed to disks, and execution restarts. It does not need to understand what the to-be-flushed data are - whether real user data or filesystem metadata. It only needs a reliable method to give (an impression of) atomicity. Please note that even _software RAID developers agree on that_. – shodanshok Apr 26 '15 at 17:41
  • If you are of a different opinion, please provide evidence - "think" is not a reply. Give us the pre and post power-loss conditions of the state machine - and why it should not work as expected. Please also note that about Linux MD RAID and write holes ("mdraid is immune to write holes") you have a different position from that of its developers ("it is vulnerable but it should not be a real problem")... this is a little difficult to sustain, isn't it? – shodanshok Apr 26 '15 at 17:48
  • You were talking about FS metadata, now you're talking about RAID's metadata. C'mon, come up with something consistent. I told you that LSR solves the "write hole", which is by definition a parity inconsistency, using parity recalculation, either optimized with a write-intent bitmap or without it. This is my statement; prove to me it's wrong, ah? – poige Apr 27 '15 at 01:18
  • Linux software RAID does **not** solve the write hole in the same manner as a power-loss-protected WB cache does. **This is acknowledged by its developers on the Linux mailing list.** I've also linked you a thread proving that. Did you read it? – shodanshok Apr 27 '15 at 06:02
  • I never said it does it in the same way. Did you read what I said? – poige Apr 27 '15 at 07:22
  • You said it is immune to the write hole. **Developers say it is not**, albeit they said that it should not matter much in real-world usage. – shodanshok Apr 27 '15 at 07:41
  • Did you ever consider the possibility that the developers of LSR meant something more than what is usually meant when people talk about RAID's "write hole"? I gave you two quotes from different sources already. Did you try understanding what they were talking about? Do you understand that LSR's parity "post"-recalculation solves the issue those two sources define as the "write hole"? – poige Apr 27 '15 at 08:39
  • No. Neil Brown (the Linux software RAID main developer) clearly stated that LSR **is** vulnerable to write holes, albeit in a manner that (in his opinion, and I agree with him) should not cause much concern and is not worth the massive performance hit taken by a RAID-level journal. Power-loss-protected WB caches **solve** the write hole problem, and that is acknowledged by the very same LSR developers. Anyway, I don't want to convince you - in fact, I am a great fan of Linux software RAID. – shodanshok Apr 27 '15 at 09:17
  • As I told you already, there's a possibility that Neil Brown uses the term "write hole" more broadly than the authors of the two quotes I cited. – poige Apr 28 '15 at 14:17
  • No. The term "write hole" has a single meaning. Anyway, I cannot speak for others. If you want, you can reach him on the mailing list. Bottom line is, however, that BBU WB caches are _not_ prone to write holes. – shodanshok Apr 28 '15 at 15:04
  • ok, let's clarify this: in your opinion is the „write hole” about RAID's „parity” inconsistency only or not limited to? – poige Apr 28 '15 at 15:45
  • The RAID write hole has two failure modes: 1) a catastrophic one, where an undetected mismatch will result in array rebuild failure if a disk fails, and 2) a data corruption one, where stale data can appear. Linux software RAID prevents the first, catastrophic scenario - and by using write bitmaps, it does this quite efficiently. It however does not protect from case no. 2 - data alteration / corruption. – shodanshok Apr 28 '15 at 18:08
  • Scenario: an outstanding write is interrupted by a power loss, so that only part of the data is written to the RAID5 array. Say that 2 data + 1 parity chunks should be written, but the power interruption left one data chunk with old/stale data. When the array restarts, one stripe will have inconsistent data/parity. The array can heal itself, but this only means that the affected stripe's parity will be recalculated to _match the inconsistent data (new+old) on the data chunks_. It is now an upper-layer task (filesystem/application) to detect the inconsistent data and to roll back the write - if it can. – shodanshok Apr 28 '15 at 18:20
  • I asked you a very clear and simple question. I don't see a clear and simple answer. I'll ask you again: «in your opinion is the „write hole” about RAID's „parity” inconsistency only or not limited to?» – poige Apr 28 '15 at 18:23
  • A hardware RAID card with a BBU (or supercap) WB cache would be immune from this problem: when power returns, it will simply replay the in-flight writes to the hard disks. Only when all data are correctly flushed to disk will it remove the entire cached stripe from its protected cache. – shodanshok Apr 28 '15 at 18:23
  • I replied very clearly. The write hole has two failure modes, due to inconsistent parity _and_ partial data writes. I have demonstrated that a protected WB cache gives _added_ protection with respect to software RAID alone, while you support a very different claim - that software RAID is more reliable than a proper (power-loss-protected) hardware RAID. I (naively?) considered your question a proper one, a question waiting for a good reply. Have a good day. – shodanshok Apr 28 '15 at 18:28
  • You're imagining things. You didn't give a clear answer. There's no understanding in what you say, just what you've learnt by heart from what you've been told once. No thinking, just replaying what you heard from someone. – poige Apr 28 '15 at 18:29
  • Sorry, but **you** wrote: _"This is the basis for the calls for battery backup units on hardware RAID versus the absence of such for software RAID"_. **This is WRONG**. Proper hardware RAID cards give _added_ protection, and I told you multiple times that this is the opinion of the very same LSR developers. Now I ask you a question: **is your statement right or wrong?** As it is wrong, please edit your answer or other users can be fooled by that nonsense. – shodanshok Apr 28 '15 at 18:33
  • You're mixing up the RAID's metadata and the data it holds. Even if the RAID controller has a BBU and has received a write request to perform, it doesn't mean FS data can't be corrupted, because the power loss can happen during the data transfer between the system and the controller. Even if an individual transfer completed (even if only to the BBU), it doesn't mean it wasn't followed by another that didn't have a chance to happen. All in all: LSR solves the write hole by recalculating its metadata when a dirty shutdown has happened. – poige Feb 16 '19 at 16:10

I work with this all the time. It depends heavily on what you're doing and the RAID level you're supporting. A SW controller running RAID 0 or 1 for the OS and nothing special is fine. Running a SW controller with RAID 5 on a database is asking for trouble! SOME hardware controllers do give you better performance, but it depends on whether it can cache and on the processor chipset of the RAID card. Also, not all software controllers are supported by every OS. So sometimes you may have to buy HW RAID to run ESXi... unless you use SATA connections.