LSI offers its CacheCade storage tiering technology, which allows SSDs to be used as read and write caches to augment traditional RAID arrays.
Other vendors have adopted similar technologies: HP Smart Array controllers have SmartCache, Adaptec has MaxCache, and there are also a number of software-based acceleration tools (sTec EnhanceIO, VeloBit, Fusion-io ioTurbine, Intel CAS, Facebook flashcache).
Coming from a ZFS background, I use different types of SSDs to handle read caching (L2ARC) and write caching (ZIL) duties. The two workloads call for different traits: low latency and high write endurance for the write cache, high capacity for the read cache.
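For context, here's roughly how that split looks on my ZFS pools; the pool name "tank" and the device paths are placeholders:

```
# Mirrored low-latency, high-endurance SSDs as the dedicated ZIL (SLOG):
zpool add tank log mirror /dev/disk/by-id/slc-ssd-0 /dev/disk/by-id/slc-ssd-1

# A single high-capacity SSD as the L2ARC read cache (no redundancy needed,
# since losing a cache device only costs you cached data):
zpool add tank cache /dev/disk/by-id/mlc-ssd-0
```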
- Since CacheCade SSDs can be used for both write and read cache, what role does the RAID controller's onboard NVRAM play?
- When used as a write cache, what risk do the CacheCade SSDs face in terms of write endurance? Using consumer-grade SSDs seems to be encouraged.
- Do writes go straight to the SSDs, or do they hit the controller's cache first?
- How intelligent is the read-caching algorithm? I understand how the ZFS ARC and L2ARC function. Is there any insight into the CacheCade tiering process?
- What metrics exist for monitoring the effectiveness of the CacheCade setup? Is there a way to observe a cache hit ratio or percentage, along the lines of the ZFS example below? How can you tell whether it's actually working?
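For reference, this is the kind of visibility I get on ZFS and would hope to find for CacheCade; the field names follow arcstat's -f option (arcstat.pl/arcstat.py depending on platform), and the raw counters come from the standard arcstats kstat:

```
# ARC and L2ARC hit ratios, sampled every 5 seconds:
arcstat -f time,read,hits,miss,hit%,l2hits,l2miss,l2hit% 5

# Raw hit/miss counters on Solaris/illumos:
kstat -m zfs -n arcstats | egrep 'hits|misses'
```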
I'm interested in opinions and feedback on the LSI solution. Any caveats? Tips?