We're a software development company that previously outsourced the hardware side of the business, but we're now looking into building our own private cloud. We recently purchased a few servers, one of which is supposed to act as central storage. The specs are as follows:
- Chassis: Supermicro CSE-826BE16-R920LPB
- Motherboard: Supermicro X10SLL-F (Xeon E3-1200 v3, Intel C222, 2x GbE, up to 32 GB DDR3 ECC across 4 DIMM slots, 2x SATA3, 4x SATA2, IPMI)
- 1x CPU: Intel Xeon E3-1220 v3, 3.1 GHz, 8 MB cache, 4 cores, HT, LGA1150, 80 W
- 1x heatsink: SNK-0046A4, active, 2U
- 4x RAM: 8 GB Samsung M391B1G73QH0-CK0, 1600 MHz DDR3 ECC unbuffered, 2Rx8
- 2x SSD: 80 GB Intel DC S3500 Series, 2.5" SATA3, read 340 MB/s, write 100 MB/s
- 10x HDD: 2 TB Seagate Constellation ES.3 ST2000NM0023, 3.5" SAS2, 7200 rpm, 128 MB cache
The Seagate drives are set up as a RAID 6 array. The SSDs are in RAID 1 and act as a maxCache container, which we enable and disable using maxView Storage Manager.
My first question: does this configuration make sense as the central storage of a private cloud where we plan to add three other compute nodes, each with two CPUs and lots of RAM?
My second question: would a similar configuration with fewer HDDs make sense for a MySQL database server backing a reporting system with many concurrent requests? Or would it make more sense to use the SSDs for MySQL's temp space, where it creates temporary tables?
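If we went the temp-space route, I assume it would mostly be a matter of pointing MySQL's `tmpdir` at the SSD pair, something like the following (the mount point is hypothetical, and the size thresholds are only illustrative; `tmpdir`, `tmp_table_size`, and `max_heap_table_size` are standard MySQL options):

```ini
[mysqld]
# Hypothetical mount point for the RAID-1 SSD pair:
tmpdir = /mnt/ssd/mysql-tmp
# Internal in-memory temp tables spill to tmpdir once they exceed these:
tmp_table_size      = 256M
max_heap_table_size = 256M
```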
And now for the main question... I tried to measure the performance of this server with and without the SSD cache (maxCache). The best tool I came across was iozone (`iozone -a -g 8G`), which produced these charts: http://www.bugweis.com/storage/comparison.zip. I am puzzled, because in most cases performance with maxCache appears to be lower than without it.
I was wondering whether iozone is a good way to test real-life scenarios, or whether I'm doing this all wrong.
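For reference, a cache-bypassing rerun I'm considering looks something like this. The `-I`, `-s`, `-r`, and `-i` flags are from iozone's documentation; the 64g file size is my own assumption, chosen to exceed this box's 32 GB of RAM so the Linux page cache doesn't dominate the results:

```shell
# Sketch of an iozone run that tries to measure the array, not RAM.
# -I        use O_DIRECT so I/O bypasses the page cache
# -s 64g    test file larger than the 32 GB of installed RAM
# -r 64k    fixed record size instead of -a's full sweep
# -i 0/1/2  sequential write/rewrite, read/reread, random read/write
IOZONE_ARGS="-I -s 64g -r 64k -i 0 -i 1 -i 2"
echo "would run: iozone $IOZONE_ARGS"
# Only invoke iozone if it is actually installed on this box:
if command -v iozone >/dev/null 2>&1; then
    iozone $IOZONE_ARGS
fi
```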