2

I am planning to deploy a cost-efficient yet performant SAN/NAS setup for our main office. Use cases: storage for a 20-30 user VDI deployment, a file server, and the primary backup location. Required usable capacity = 10TB.

The storage software side is yet to be decided; right now I am researching possible configurations of the underlying storage hardware. I've compared the prices for a 10K RPM SAS-based RAID10 setup (10x 2TB HDDs) and a SATA SSD RAID5 configuration (7x 1.6TB SSDs). Interestingly, the SSD setup comes out 20% cheaper if read-intensive drives are used, and costs 10% more if I choose mixed-use drives. That means all-flash RAID5 looks like a feasible option, at least on paper.
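For reference, here is my usable-capacity math (assuming a straight mirror for RAID10 and single-drive parity for RAID5):

    RAID10: 10 x 2 TB, half lost to mirroring -> 10.0 TB usable
    RAID5:  7 x 1.6 TB, one drive of parity   -> (7 - 1) x 1.6 = 9.6 TB usable

So the RAID5 set actually lands slightly under the 10TB target.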

However, a long time ago I experienced tons of trouble with RAID5 in a "good old" 5x 70GB SCSI HDD configuration. Even now, that thing still gives me nightmares. Moreover, I've looked through some threads like This and This, and it looks like some people are seriously convinced that my "all-flash RAID5" plan is not going to work.

So, the question is: do you guys have any good reading on this topic or could you share your personal experience with RAID5 SSD setups? Many thanks in advance!

Laucktba
  • 295
  • 2
  • 8
  • I'm sorry, but do you have budget details, any information on the virtualization technology in place or any preferences on vendor? Right now, it's not easy to answer your question. – ewwhite Mar 17 '17 at 19:39
  • Does it have to be one contiguous storage pool? Can't you use SSDs for VDIs, maybe file server, and HDDs for backups, which don't require such speeds? Also as SATA SSDs are an option, can't you go with SATA HDDs as well? Those might be cheaper. In fact, if you went with 7200 RPM HDDs, you could build a RAID1 array for a lot cheaper for "slow" storage, and get some ludicrously fast write-intensive SSDs for the fast storage. – pilsetnieks Mar 17 '17 at 21:28
  • Also consider using at least RAID6 over RAID5, if you go that route, to help prevent data loss. – sleepyweasel Mar 18 '17 at 05:19
  • @ewwhite budget for the whole project is $10K. We are heavily Windows-oriented, so I don't really want to use a ZFS-based solution. – Laucktba Mar 20 '17 at 08:59
  • @pilsetnieks We are going to use enterprise hardware (for support / SLA reasons), so the cheap SATA HDD option is not going to work for us. NL SAS 7.2K RPM RAID10 will cost the same as SSD RAID5 – Laucktba Mar 20 '17 at 09:12
  • Friends don't let friends use R5, it's 2017 – Chopper3 Mar 20 '17 at 15:21

4 Answers

5

From my experience with this kind of production, I would recommend going with RAID5 on SSDs: it utilizes storage efficiently while remaining performant. The setup also minimizes RAID rebuild time, since fast drives are used.

https://www.starwindsoftware.com/blog/raid-5-was-great-until-high-capacity-hdds-came-into-play-but-ssds-restored-its-former-glory-2
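As a rough back-of-the-envelope illustration (the sustained-throughput figures below are my assumptions, not measurements), a best-case rebuild has to rewrite one drive's worth of data, so:

    2 TB HDD   at ~150 MB/s: 2,000,000 MB / 150 MB/s ≈ 3.7 hours (idle array)
    1.6 TB SSD at ~450 MB/s: 1,600,000 MB / 450 MB/s ≈ 1.0 hour  (idle array)

Real rebuilds under production load take considerably longer, but the ratio between the two holds.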

For the project, go with hardware RAID if your production is around 2-3 hosts, and with software RAID for clusters of 4+ nodes.

Mr. Raspberry
  • 3,878
  • 12
  • 32
  • 1
    I'm now testing an SSD RAID5 configuration and can confirm it holds up from the performance standpoint. I'll probably stick with this option and proceed with the hardware order. – Laucktba Mar 24 '17 at 14:27
0

I agree with @pming on using ZFS as the filesystem. It'll give you some good options you might be interested in, e.g. deduplication, various compression options, snapshots, and replication (to another pool or system for backups). Another thing to consider with ZFS would be using larger non-SSD drives and adding an SSD as a read or write cache.

Also consider using at least RAID6 (raidz2 in ZFS speak) over RAID5 (raidz in ZFS speak) to help prevent data loss.
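As a minimal sketch of what that looks like (pool and device names below are placeholders, not a recommendation for your exact hardware):

    # six-disk raidz2 pool (double parity); device names are placeholders
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # add an SSD as an L2ARC read cache
    zpool add tank cache nvd0

    # compression and snapshots are set at the dataset level
    zfs set compression=lz4 tank
    zfs create tank/files
    zfs snapshot tank/files@daily
    # replicate the snapshot to another box for backups
    zfs send tank/files@daily | ssh backuphost zfs recv backuppool/files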

Some of your comments hint that you plan on building some kind of home-grown system to handle your office's needs, but others slightly hint that you may buy an array or complete solution from a vendor. You may want to clarify.

Nexenta offers a good solution to build a storage system utilizing ZFS.

sleepyweasel
  • 171
  • 6
0

I use ZFS for this, on a similar amount of space, and yes, I use it for VDI/ESX. No, I don't think you should use raid6: it has too much space overhead. raid5 is enough if you use 5-disk vdevs in spans (reducing the cold-data issue to its minimum), effectively giving you a raid50 configuration, since ZFS always stripes its data when possible. If you're cautious about cold data, scrub periodically (in fact, do that even if you're not cautious). Cold data is a problem with raid6 as well, by the way.
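To illustrate (device names are placeholders), two 5-disk raidz1 vdevs in one pool, striped automatically by ZFS, plus the periodic scrub:

    # two 5-disk raidz1 vdevs; ZFS stripes across them (raid50-like)
    zpool create tank \
        raidz1 da0 da1 da2 da3 da4 \
        raidz1 da5 da6 da7 da8 da9

    # run this periodically (e.g. from cron) to catch silent corruption in cold data
    zpool scrub tank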

Also keep in mind that ZFS carries a lot of overhead: you have to keep the pools no more than 85% full (or use dedicated log devices, which partially eliminates this problem). And if you intend to use zvols, keep in mind that the volblocksize has to be at least 8 times the sector size, a requirement that isn't met automatically with the newer AF drives (which you will probably end up with at this amount of space).
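For example, on 4K-sector (AF) drives that rule gives a 32K minimum, so a zvol would be created along these lines (names and sizes are illustrative):

    # 4K sectors x 8 = 32K minimum volblocksize per the rule above
    zfs create -s -V 2T -o volblocksize=32K tank/vdi
    # watch pool utilization; CAP should stay under ~85%
    zpool list -o name,size,alloc,cap tank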

Besides that, SSDs just rock on ZFS/raid5.

P.S. Avoid SanDisk at all costs.

drookie
  • 8,051
  • 1
  • 17
  • 27
-1

I recommend using ZFS for this. If you need a single box for different use cases, ZFS enables you to build different storage pools. You could create a mirrored-stripe zpool (similar to RAID10) with 4 SSDs for your VDI, use some larger 10K or 15K drives for file services, and some even larger 7.2K drives for a backup pool.

For example:

4 x 400 GB SSD mirrored stripe = 0.8 TB for VDI

5 x 1 TB 10K SAS raidz1 (similar to RAID5) = 4 TB for SMB, AFP, NFS, whatever

5 x 2 TB 7.2K SATA / SAS raidz1 = 8 TB for backup (these are rather cheap compared to fast, enterprise-grade SSDs, and perhaps you don't need such speeds for backup?)

This really depends on how much capacity you need for each of these use cases.
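In zpool terms, that layout could look something like this (device names are placeholders):

    # VDI pool: striped mirrors (RAID10-like) over 4 x 400 GB SSDs
    zpool create vdi mirror ssd0 ssd1 mirror ssd2 ssd3

    # file services pool: 5 x 1 TB 10K SAS in raidz1 (~4 TB usable)
    zpool create files raidz1 sas0 sas1 sas2 sas3 sas4

    # backup pool: 5 x 2 TB 7.2K drives in raidz1 (~8 TB usable)
    zpool create backup raidz1 nl0 nl1 nl2 nl3 nl4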

pming
  • 19
  • 1
  • 4
  • 1
    I plan to use a good old hardware RAID controller. Something like the Dell H730 should work well with SSD drives (at least, that's what its specs state). As I mentioned before, we have a "Windows-heavy" environment and I don't want to spawn a Linux-based SAN here. We decided to stick with enterprise-grade hardware (most probably Dell R630/R730), which is why cheap high-capacity SATA HDDs are not going to work in this case :( – Laucktba Mar 20 '17 at 09:06
  • I was thinking about whether I should also recommend an OS. Actually, I would've recommended FreeBSD for this (which is not Linux, by the way; it's a BSD derivative). If you're not comfortable with that, you could also use FreeNAS, which has a very easy-to-use web interface and support for iSCSI, SMB (for Windows clients) and AD integration. If you need an OS to handle storage for you, FreeNAS might be the way to go. However, in that case you may want to use an HBA instead of a RAID controller. I can't really make recommendations for using Windows for this purpose, for lack of experience. – pming Mar 20 '17 at 09:26