
When buying an FBWC storage controller such as HP's P420, you can choose either a 512 MB, 1 GB, or 2 GB cache capacity. What differences do these capacities provide? Is there a metric I can use to choose between them?

Is it related to an 'all-the-time' throughput statistic, or is it more related to a 'how much data is in flight at time of failure' statistic?

Nicholas

3 Answers


There are definitely diminishing returns with higher capacities. The read cache will contain the hottest blocks being accessed, which in many cases means the blocks associated with your filesystem metadata. If your entire metadata set can fit into the read cache, overall filesystem performance will be noticeably snappier. The size of that metadata depends on the filesystem used, though, so size appropriately there.
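
As a rough illustration of that sizing step, here is a back-of-the-envelope sketch; the 256-byte inode size, the 20% overhead figure and the file counts are all assumptions, so substitute numbers from your own filesystem:

```python
# Back-of-the-envelope check: does the filesystem metadata fit in the read cache?
# Assumed numbers: ext4-style 256-byte inodes plus ~20% overhead for directory
# entries and allocation bitmaps -- substitute values for your own filesystem.

def metadata_footprint_mb(file_count, inode_bytes=256, overhead=0.20):
    """Estimate metadata size in MB for a given number of files/directories."""
    return file_count * inode_bytes * (1 + overhead) / (1024 * 1024)

for files in (1_000_000, 5_000_000, 20_000_000):
    print(f"{files:>12,} files -> ~{metadata_footprint_mb(files):,.0f} MB of metadata")

# ~1M files fits comfortably in a 512 MB module; past a few million files even
# the 2 GB module can no longer hold all of the metadata.
```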

Once you've exceeded the metadata size, more cache yields smaller immediate returns. It still improves performance, but the metrics get complicated and depend on your I/O rates.

One thing these controllers do when set to write-back mode (a write is acknowledged as committed once it is in the cache) is reorder writes so they go to disk in a more sequential way, which increases the perceived performance of the system. The more writes you push per second, the more write cache they can use.
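
To put some (made-up) numbers on that last point, the write cache only helps for as long as it can absorb the gap between the incoming write rate and what the disks can drain:

```python
# How long can a write-back cache absorb writes arriving faster than the disks
# can drain them?  The rates below are made-up examples -- measure your own.

def burst_seconds(cache_mb, incoming_mb_s, drain_mb_s):
    """Seconds until the cache fills and writes become disk-bound again."""
    if incoming_mb_s <= drain_mb_s:
        return float("inf")               # disks keep up; the cache never fills
    return cache_mb / (incoming_mb_s - drain_mb_s)

for cache_mb in (512, 1024, 2048):        # the three FBWC capacity options
    t = burst_seconds(cache_mb, incoming_mb_s=300, drain_mb_s=150)
    print(f"{cache_mb:>5} MB cache absorbs a 300 MB/s burst for ~{t:.0f} s")

# Once the cache is full, throughput falls back to what the spindles can sustain.
```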

sysadmin1138
  • Can I interpret from your answer that, in terms of read cache, a larger cache would make a big difference for a web server but little difference for a business file or mail server? – Nicholas Jun 14 '12 at 12:06
  • @Nicholas For a web server, the system block cache has a big impact and can be sized larger a lot more easily, with the RAID card cache being a second tier. RAID cache is rarely over 2 GB, whereas system RAM can be quite a lot larger than that. – sysadmin1138 Jun 14 '12 at 16:28

It depends on your write patterns. I typically bias my servers' controller cache towards a 75:25 write:read ratio... but 512 MB and 1 GB have been good enough to buffer write activity for my applications. You'll be flushing to disk often enough that size may not matter for most apps. Having the extra cache may be useful in situations where you could benefit from allocating more to the read cache. But this all depends on your activity, especially considering OS and filesystem caching...
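
If you want to measure your own write:read pattern before picking a ratio, a quick sketch along these lines works on Linux; the device name and the 60-second window are assumptions, and it only sees traffic that actually reaches the block layer after OS/filesystem caching:

```python
# Rough read:write split for one block device, sampled from /proc/diskstats.
# Counting from 1 (including major/minor/name), field 6 is sectors read and
# field 10 is sectors written.  The device name and interval are assumptions.
import time

def sectors(device="sda"):
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                return int(parts[5]), int(parts[9])   # (sectors read, written)
    raise ValueError(f"device {device!r} not found")

r1, w1 = sectors()
time.sleep(60)                # sample window; longer windows are more representative
r2, w2 = sectors()
reads, writes = r2 - r1, w2 - w1
total = (reads + writes) or 1
print(f"read {100 * reads / total:.0f}% : write {100 * writes / total:.0f}% by volume")
```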

ewwhite

More capacity means more cache: the more data the server can read from cache instead of fetching it from the disks, the better the performance. It's fast, effective and safe.

To arrive at a metric, you have to know which storage blocks are accessed most often when random reads and writes occur (mostly that, though it can depend on data bandwidth too). If you are accessing terabytes at a constant multi-Gbps bandwidth, the cache may have little effect, even in the case of random writes.
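
As a crude way to see why, assuming uniformly random access across the working set, the expected hit rate is roughly the cache size divided by the working-set size:

```python
# Crude hit-rate estimate assuming uniformly random access across the working set.
# Real workloads are skewed towards hot blocks, so treat this as a pessimistic floor.

def uniform_hit_rate(cache_gb, working_set_gb):
    return min(1.0, cache_gb / working_set_gb)

for working_set_gb in (0.5, 10, 1000, 10_000):        # example working-set sizes
    rate = uniform_hit_rate(2, working_set_gb)        # 2 GB module, the largest option
    print(f"working set {working_set_gb:>8} GB -> ~{rate:.1%} hit rate with 2 GB of cache")

# Against terabytes of uniformly accessed data even the largest module barely helps;
# it pays off when the hot set (metadata, hot blocks) is small enough to fit.
```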

It's not only about reads; depending on your configuration it may act as a write cache too, but if the write cache is not battery- or flash-protected, you'll lose any data that has not yet been written to disk.

HDDs also have their own cache memory, which can likewise be disabled (e.g. write-through mode).

GioMac