In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice.
Long answer: when computing power was limited, hardware RAID cards had the significant advantage of offloading parity/syndrome calculation for the RAID schemes that require it (RAID 3/4/5, RAID 6, etc.).
However, with ever-increasing CPU performance, this advantage has basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) delivers XOR throughput of over 4 GB/s and RAID-6 syndrome computation of over 3 GB/s on a single execution core.
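If you want to check these figures on your own machine, the Linux kernel benchmarks its XOR and RAID-6 syndrome implementations when the md modules load and logs the results; a quick way to see them (the exact message format varies with kernel version and CPU) is:

```
# per-core XOR and RAID-6 syndrome throughput as measured by the kernel itself
dmesg | grep -E 'raid6:|xor:'
```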
The advantage hardware RAID retains today is the presence of a power-loss-protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random writes (and for reads that hit it) and effectively turns random writes into sequential writes. A RAID controller without such a cache is near useless. Moreover, some low-end RAID controllers not only come without a cache, but forcibly disable the disks' private DRAM cache, leading to slower performance than with no RAID card at all. Examples are Dell's PERC H200 and H300 cards: they completely disable the disks' private cache and (unless newer firmware has changed that) actively forbid re-enabling it. Do yourself a favor and never, ever buy such controllers. While even higher-end controllers often disable the disks' private cache, they at least have their own protected cache, making the HDDs' (but not the SSDs'!) private cache somewhat redundant.
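To check whether a controller has silently turned off a disk's private write cache, hdparm can query and (try to) change it; a minimal sketch, assuming a plain SATA disk visible as /dev/sda (controllers like the ones above may simply refuse or revert the change):

```
# query the drive's current volatile write cache setting
hdparm -W /dev/sda

# attempt to re-enable it (use -W0 to disable)
hdparm -W1 /dev/sda
```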
This is not the end of the story, though. Even capable controllers (those with a BBU- or NVRAM-protected cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable the disk's private cache (eg: PERC H700/710/710P), if that private cache is volatile you risk losing data on power loss. The exact behavior is really controller- and firmware-dependent (eg: on a Dell S6/i with a 256 MB WB cache and the disks' cache enabled, I had no losses during multiple planned power-loss tests), which leaves uncertainty and much concern.
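If you want to measure how a specific controller/cache/disk combination behaves under the kind of synchronized random writes discussed here, a short fio run is usually enough; a sketch, with file name, size and runtime chosen arbitrarily:

```
# 4K random writes with an fsync after each one, queue depth 1:
# this exposes the real latency of the (protected or volatile) write cache
fio --name=syncwrite --filename=/mnt/test/fiofile --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --fsync=1 \
    --runtime=60 --time_based
```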
Open-source software RAID, on the other hand, is a much more controllable beast: its code is not locked inside proprietary firmware, and it has well-defined metadata formats and behaviors. Software RAID makes the (correct) assumption that the disks' private DRAM cache is not protected but, at the same time, is critical for acceptable performance; so rather than disabling it, it uses ATA FLUSH / FUA commands to write critical data to stable storage. As the disks often hang off the SATA ports attached to the chipset's southbridge, bandwidth is very good and driver support is excellent.
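On Linux you can see whether the kernel treats a disk's cache as volatile (and will therefore issue FLUSH/FUA when requested) from sysfs; for example, assuming the disk shows up as sda:

```
# "write back"    -> volatile cache, the kernel issues flushes when asked to
# "write through" -> no flushing needed (cache disabled or non-volatile)
cat /sys/block/sda/queue/write_cache
```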
However, if used with mechanical HDDs, synchronized random write access patterns (eg: databases, virtual machines) will suffer greatly compared to a hardware RAID controller with a WB cache. On the other hand, when used with enterprise SSDs (ie: with a power-loss-protected write cache), software RAID often excels and gives results even better than hardware RAID cards. Unfortunately, consumer SSDs only have a volatile write cache, delivering very low IOPS on synchronized write workloads (albeit very fast at reads and async writes).
Also consider that software RAID implementations are not all created equal. Windows software RAID has a bad reputation performance-wise, and even Storage Spaces seems not much different. Linux MD RAID is exceptionally fast and versatile, but the Linux I/O stack is composed of multiple independent pieces that you need to understand carefully to extract maximum performance. ZFS parity RAID (RAIDZ) is extremely advanced but, if not correctly configured, can give you very poor IOPS; mirroring + striping, on the other hand, performs quite well. In any case, ZFS needs a fast SLOG device for synchronous write handling (ZIL).
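To illustrate the mirroring+striping layout and the SLOG mentioned above, a striped-mirror pool with a separate log device can be created roughly as follows (pool and device names are placeholders; the SLOG itself should be a power-loss-protected SSD):

```
# RAID10-like pool made of two mirrored vdevs
zpool create tank mirror sda sdb mirror sdc sdd

# add a fast, power-loss-protected SSD as a dedicated ZIL/SLOG device
zpool add tank log nvme0n1
```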
Bottom line:
- if your workloads are not sensitive to synchronized random writes, you don't need a RAID card
- if you need a RAID card, do not buy a RAID controller without a WB cache
- if you plan to use SSDs, software RAID is preferred, but keep in mind that for heavy synchronized random writes you need power-loss-protected SSDs (ie: Intel S/P/DC series, Samsung PM/SM series, etc). For pure performance the best choice probably is Linux MD RAID, but nowadays I generally use striped ZFS mirrors. If you cannot afford losing half the space to mirrors and you need ZFS's advanced features, go with RAIDZ, but carefully think about your VDEV setup.
- if, even when using SSDs, you really need a hardware RAID card, use SSDs with power-loss-protected write caches.
- if you need RAID6 with normal mechanical HDDs, consider buying a fast RAID card with 512 MB (or more) of WB cache. RAID6 carries a high write performance penalty, and a properly-sized WB cache can at least provide fast intermediate storage for small synchronous writes (eg: the filesystem journal).
- if you need RAID6 with HDDs but you can't or don't want to buy a hardware RAID card, carefully think about your software RAID setup. For example, a possible solution with Linux MD RAID is to use two arrays: a small RAID10 array for journal writes / DB logs, and a RAID6 array for bulk storage (as a fileserver); see the sketch after this list. On the other hand, software RAID5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSD setup.
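For the two-array MD layout in the last point, a minimal sketch could look like this (partition and device names are purely illustrative; a real setup needs proper alignment, chunk-size and filesystem tuning):

```
# small RAID10 for journals / DB logs, on the first partition of each disk
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

# large RAID6 for bulk file storage, on the second partition of each disk
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[b-e]2
```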