
We have a few servers with PERC H710s. We're moving to SSDs, and I wonder: is caching still needed given that we're not using spinning disks? Our general setup is as follows:

2 x 2 TB SSD in RAID 1
  • Read Policy: No Read Ahead
  • Write Policy: Write Through
  • Disk Cache Policy: Disabled
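For reference, here's roughly how we read those policies back from the controller (a quick sketch, assuming the LSI MegaCli64 binary is installed since the H710 is MegaRAID-based; the install path below is just an example, and Dell's perccli has equivalent commands with different syntax):

```python
# Sketch: print the current cache policies for all logical drives on all
# adapters by shelling out to MegaCli. Assumes MegaCli64 is installed at the
# path below (adjust to your environment).
import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # example path, adjust as needed

def show_cache_policies():
    # Controller cache policy: read ahead / no read ahead, write back / write through
    subprocess.run([MEGACLI, "-LDGetProp", "-Cache", "-LAll", "-aAll"], check=True)
    # On-disk cache setting of the member SSDs
    subprocess.run([MEGACLI, "-LDGetProp", "-DskCache", "-LAll", "-aAll"], check=True)

if __name__ == "__main__":
    show_cache_policies()
```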

Is there any benefit to having caching enabled? Are there any issues we might face by not having caching enabled? Are we more likely to suffer data loss (that could be mitigated by having caching on) if a power outage were to happen and our BBUs somehow also failed?

DipreSantana
    For background - if you do caching you will probably want a battery-backed write cache (BBWC) and/or flash-backed write cache (FBWC): https://serverfault.com/q/581524/37681 – HBruijn Jan 15 '18 at 22:06

2 Answers


Is there any benefit to having caching enabled?

Yes, there is still potential for a performance benefit from the on-controller cache. SSDs perform significantly faster than spindle drives, but transfers in and out of the controller's cache are faster still.

There's also potential to reduce write wear on the SSDs, depending on your workload. Writes to the controller cache aren't necessarily flushed to disk immediately, so if multiple writes hit the same blocks within a short period of time, only the last change actually gets written to the SSDs; the intermediate versions never leave the cache. The true write-wear savings would be a bit difficult to measure or compare, though.
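To make the coalescing idea concrete, here's a toy model (not how the controller actually schedules flushes, just a sketch of why repeated writes to the same blocks can reach the SSDs far less often than they arrive from the host):

```python
# Toy model of write coalescing in a write-back cache: repeated host writes to
# the same block within one flush interval reach the SSD only once, so device
# writes can be a small fraction of host writes.
import random

def simulate(host_writes, blocks, flush_every):
    cache = {}                           # block -> latest (dirty) data
    device_writes = 0
    for i in range(1, host_writes + 1):
        block = random.randrange(blocks)
        cache[block] = f"data-{i}"       # a rewrite just replaces the cached copy
        if i % flush_every == 0:
            device_writes += len(cache)  # each dirty block is written once
            cache.clear()
    return device_writes + len(cache)    # final flush of remaining dirty blocks

if __name__ == "__main__":
    random.seed(0)
    host = 100_000
    dev = simulate(host_writes=host, blocks=1_000, flush_every=10_000)
    print(f"{host} host writes -> {dev} device writes "
          f"(~{100 * dev / host:.0f}% of the write volume hits the SSDs)")
```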

Are there any issues we might face by not having caching enabled?

Nothing that's specific to the use of SSDs, no. You'd only see the same negative effects you'd get by disabling caching on spindle disks, though obviously much less severe.

Are we more likely to suffer data loss (that could be mitigated by having caching on) if a power outage were to happen and our BBUs somehow also failed?

I'm assuming you meant "mitigated by having caching off" here? Write-back caching carries a higher risk of potential data loss/corruption than write-through mode. The same risks exist with and without controller caching for incomplete writes that haven't been sent in their entirety from the operating system to the controller, though.

I would suggest testing every combination of read-ahead and write caching against your expected I/O workload (try to match the read/write ratio, I/O size, and random/sequential mix if possible) to see which configuration actually gives you the best performance.
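As a rough sketch of what such a test loop could look like (assuming fio is installed and that /mnt/ssdtest is a made-up mount point on the RAID 1 volume; you'd change the controller's read-ahead/write policy between passes and re-run the same workloads):

```python
# Rough benchmarking sketch: run fio with read/write mixes and block sizes that
# approximate the production workload. Repeat the whole set after changing the
# controller's read-ahead / write cache policy to compare configurations.
# Assumes fio is installed and /mnt/ssdtest (an example path) is on the RAID 1 volume.
import subprocess

WORKLOADS = [
    # (name, fio rw mode, read percentage, block size) - adjust to your workload
    ("oltp-like",  "randrw", 70,  "8k"),
    ("log-writes", "write",  0,   "64k"),
    ("seq-reads",  "read",   100, "1m"),
]

def run_fio(name, rw, rwmixread, bs):
    cmd = [
        "fio", "--name", name,
        "--directory", "/mnt/ssdtest",
        "--rw", rw, "--bs", bs,
        "--size", "4g", "--runtime", "60", "--time_based",
        "--direct", "1", "--ioengine", "libaio", "--iodepth", "16",
        "--group_reporting",
    ]
    if rw == "randrw":
        cmd += ["--rwmixread", str(rwmixread)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for workload in WORKLOADS:
        run_fio(*workload)
```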

Lastly, keep in mind that write caching is the only feature which brings potential corruption risks. Read-ahead caching may or may not improve performance, but certainly does not pose a risk.

JimNim

Caching may not be as necessary with SSDs, but it should still be beneficial. The cache on RAID cards runs at RAM speeds, i.e. multiple GB/s. If you are working with SATA II drives, the interface tops out at 3 Gbit/s (roughly 300 MB/s), which will still clearly benefit from the cache. If you have SATA III drives at 6 Gbit/s (roughly 600 MB/s), or 12 Gbit/s SAS, that may start getting closer to some of the lower-end cache modules.
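A quick back-of-the-envelope comparison (the DDR3-1333 figure is only an assumed theoretical peak; the actual cache module and its speed vary by controller model):

```python
# Back-of-the-envelope throughput comparison. SATA uses 8b/10b encoding, so
# usable throughput is about 80% of the line rate; the DDR3-1333 number is the
# theoretical peak of a 64-bit module (an assumption, varies by cache module).
def sata_mb_per_s(line_rate_gbit):
    return line_rate_gbit * 1e9 * 0.8 / 8 / 1e6   # 8b/10b overhead, bits -> bytes

print(f"SATA II  (3 Gbit/s): {sata_mb_per_s(3):.0f} MB/s")
print(f"SATA III (6 Gbit/s): {sata_mb_per_s(6):.0f} MB/s")
print(f"DDR3-1333 peak:      {1333 * 8} MB/s")    # MT/s x 8 bytes per transfer
```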

As for the issue of data loss and how much that risk plays in, that really depends on a lot of things:

  • How common power outages are in your area
  • Whether you have sufficient generator or battery backup at the whole-system level
  • Whether you have a newer flash-backed write cache (FBWC), whose capacitors have longer lifespans than batteries
  • How much write activity you have, since that determines how much data is sitting in cache waiting to be lost (a rough way to bound it is sketched below)
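For that last point, a very rough way to bound the exposure (all numbers here are made up for illustration):

```python
# Rough upper bound on data at risk if the cache contents are lost: whatever
# dirty data accumulated since the last flush, capped by the cache module size.
# All figures below are illustrative assumptions, not measurements.
def data_at_risk_mb(write_rate_mb_s, seconds_since_flush, cache_size_mb):
    return min(write_rate_mb_s * seconds_since_flush, cache_size_mb)

# e.g. 50 MB/s of sustained writes, a flush at most every 4 seconds,
# and a 1 GB cache module -> at most ~200 MB exposed
print(data_at_risk_mb(write_rate_mb_s=50, seconds_since_flush=4, cache_size_mb=1024))
```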
Cory Knutson