11

I just recently bought a new server, an HP DL380 G6. I replaced the stock Smart Array P410 controller with an LSI 9211-8i.

My plan is to use ZFS as the underlying storage for Xen, which will run on the same bare metal.

I have been told that you can use SATA disks with the Smart Array controllers, but because consumer drives lack TLER, CCTL and ERC it's not recommended. Is this the case?

If I use the LSI controller in JBOD (RAID passthrough) mode, does the kind of disk I use really have as much of an impact as it would with the Smart Array controller?

I am aware that using a RAID system not backed by a write cache for virtualization is bad for performance. But I was considering adding an SSD for ZFS. Would that make any difference?

The reason I am so obsessed with using ZFS is dedup and compression. I don't think the Smart Array controller can do either of those.
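For context, what I'm hoping to set up looks roughly like this (pool, dataset, and device names are just placeholders, and this assumes a reasonably recent ZFS on Linux that supports lz4):

```shell
# Create a mirrored pool on the disks behind the HBA (device names are examples)
zpool create tank mirror /dev/sdb /dev/sdc

# A dataset for VM storage with the features I'm after
zfs create tank/vms
zfs set compression=lz4 tank/vms   # cheap on CPU, usually a net win
zfs set dedup=on tank/vms          # this is the RAM-hungry one
```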

ianc1215
  • Using consumer SATA drives on a server is never recommended. But I suspect the reasons are not necessarily driven by reliability statistics. There is a growing amount of research available that backs that statement up, so go right ahead and use consumer disks if you are prepared to take the risk. – hookenz Jan 06 '14 at 03:13
  • See [**ZFS best practices with hardware RAID**](http://serverfault.com/questions/545252/zfs-best-practices-with-hardware-raid/545261#545261). You can run ZFS on top of a logical drive provided by the Smart Array controller. In the setup you describe, an SSD for ZFS probably won't help much. Compression on ZFS is great. [Deduplication on ZFS is not](http://serverfault.com/questions/403353/backup-storage-server-with-zfs/403392#403392). – ewwhite Jan 06 '14 at 15:07

2 Answers

13

Please don't do this.

If you're going to run ZFS on Linux, do it bare metal, without a virtualization layer. All-in-one virtualization-plus-ZFS solutions are cute, but they're not worth the effort in production.

As far as drives are concerned, you can use SATA disks on an HP Smart Array controller as well as on the LSI 9211-8i. In a ZFS configuration, a failing SATA disk can still have an adverse effect on the system, even behind the LSI controller.

Using consumer disks is just what it is. Go into it knowing the caveats.


Edit:

So you're looking to run a ZFS filesystem to provide storage for local virtual machines?

The HP Smart Array P410 is a good RAID controller. Most importantly, yours likely has a battery-backed or flash-backed write cache. That's important for performance purposes. Achieving the same thing properly on ZFS (using the ZIL) is far more costly and requires more engineering thought. ZFS may not offer you much over a traditional filesystem like XFS for this particular purpose.
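To make the ZIL point concrete: approximating a flash-backed write cache on ZFS means adding a dedicated low-latency log device, ideally mirrored. A sketch, with placeholder pool and device names:

```shell
# Add a mirrored SLOG (separate ZFS intent log) on two SSDs.
# Synchronous writes land here first; mirroring protects the pool
# if one log device dies.
zpool add tank log mirror /dev/sdd /dev/sde

# The SSDs appear under a "logs" section in the pool layout
zpool status tank
```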

This would be different if you were using ZFS on a dedicated server to provide storage to other hypervisors.

See: [ZFS best practices with hardware RAID](http://serverfault.com/questions/545252/zfs-best-practices-with-hardware-raid/545261#545261)

ewwhite
  • I don't think my question was clear. I'm not running ZFS in a virtual machine. I'm running ZFS on the bare metal. It will provide storage for my virtual machines. As for the RAID card, I was told that using an HP RAID card "hides" the bare-metal drives and makes ZFS less effective. Is this the case? – ianc1215 Jan 06 '14 at 02:16
  • @Solignis See my edit above. – ewwhite Jan 06 '14 at 02:25
  • Yes, local storage for a Xen server. The reason I was looking to use the LSI controller is that it supports JBOD. The Smart Array does not. – ianc1215 Jan 06 '14 at 02:30
  • @Solignis Again, the LSI controller and ZFS offer you no benefit for your use case. You won't have write caching, which is *BAD* for virtualization. You'll need to use software RAID to boot the system and likely dedicate physical disks to booting. It's really not worth it. You could run ZFS atop your hardware RAID, using a single device, but you'd really need a specific reason for needing ZFS. See this question: http://serverfault.com/questions/545252/zfs-best-practices-with-hardware-raid/545261#545261 – ewwhite Jan 06 '14 at 02:37
  • I agree with ewwhite. ZFS in Dom0 provides no real benefit and is likely to badly hurt performance. – hookenz Jan 06 '14 at 03:15
6

Using consumer-grade disks in server-grade hardware is possible, though not recommended if you intend to use the vendor's support. They will give you endless grief about replacing the perfectly supported drives with unsupported ones. Aside from that, there is no problem doing it, and Backblaze proved it (http://www.getoto.net/noise/2013/11/12/how-long-do-disk-drives-last/).

As for drive selection, look for drives that support NCQ and you should be mostly fine.
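You can check whether a drive advertises NCQ from Linux with hdparm (the device name is just an example):

```shell
# "Native Command Queueing (NCQ)" should appear among the supported
# features; "Queue depth: 32" is the usual NCQ depth.
hdparm -I /dev/sda | grep -iE 'ncq|queue depth'
```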

Using the drives in JBOD mode is asking for trouble. Quite possibly the LSI controller will present them as just one big disk (and you do not want that). What you need is passthrough mode (basically using the controller as an expander to increase the port count). Check whether this is the case.
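A quick sanity check after booting: in true passthrough mode the OS sees each physical disk individually. On an LSI 9211-8i you can also check whether the card runs the IR (integrated RAID) or IT (plain HBA) firmware:

```shell
# Each physical disk should appear as its own block device
lsblk -d -o NAME,SIZE,MODEL

# LSI's sas2flash utility reports the controller's firmware type (IR vs IT)
sas2flash -listall
```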

ZFS on Linux: not a stellar idea. It is still not stable enough, though it is usable. Dedup on ZFS: quite a big no if you are planning to run a serious load on the machine. It tends to eat lots of RAM (on the order of 2-4 GB for every 200-500 GB of deduped data). It might have improved, but I haven't checked recently. Compression might be a good fit, though it depends on the data.
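A rough back-of-the-envelope check of that RAM figure, assuming the commonly cited ~320 bytes of dedup-table entry per unique block and an average block size of 64 KiB (real block sizes vary a lot with workload):

```shell
# Estimate DDT (dedup table) RAM for a given amount of unique data
data_gib=500                         # 500 GiB of deduped data
avg_block=$((64 * 1024))             # assumed average block size
ddt_entry=320                        # approx. bytes per DDT entry
blocks=$((data_gib * 1024 * 1024 * 1024 / avg_block))
ddt_mib=$((blocks * ddt_entry / 1024 / 1024))
echo "${ddt_mib} MiB of RAM for the DDT"   # -> 2500 MiB for 500 GiB
```

That lands right in the 2-4 GB per few hundred GB range; with small blocks (e.g. 8K zvols for VMs) it gets much worse.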

SSD: yes, it will make quite a nice difference. There are several areas (the ZIL was already mentioned above) that improve quite a lot if placed on a separate disk (and even more if on an SSD).
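A single SSD can even be split between the two roles with two partitions (sizes, pool, and device names here are purely illustrative):

```shell
# Small partition as SLOG, the rest as L2ARC read cache
zpool add tank log /dev/sdf1     # a few GB is plenty for the ZIL
zpool add tank cache /dev/sdf2   # L2ARC; its contents are disposable
```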

If you are adamant on ZFS, I would suggest using either Solaris/Nexenta/OpenSolaris or BSD for the storage host and then exporting the storage to the Xen hosts over iSCSI/ATA-over-Ethernet/etc.
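On an illumos-based storage host, the export side is a zvol plus COMSTAR; roughly (all names are placeholders):

```shell
# Create a 100 GB zvol to back one VM
zfs create -V 100G tank/vm1

# Expose it over iSCSI via COMSTAR (illumos/OmniOS/Nexenta)
stmfadm create-lu /dev/zvol/rdsk/tank/vm1
itadm create-target
stmfadm add-view <lu-guid-printed-by-create-lu>
```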

I strongly suggest at least skimming the Backblaze blog and looking at the ideas they use in the construction of their Pods.

zeridon
  • ZFS on Linux is quite stable, but there's less leniency in ZFS best practices. You still need to plan and engineer accordingly. Hardware RAID controllers are more forgiving. – ewwhite Jan 06 '14 at 15:41
  • As much as I want to use ZFS for its features, all of the points made are very good. If I had another server I would set up an iSCSI target, but my budget for personal equipment is low since this is not related to a business. Thanks for the insight. – ianc1215 Jan 06 '14 at 16:28
  • I have a system with the exact controller mentioned by the OP (LSI 9211-8i SAS HBA), with IR firmware (I meant to re-flash it to IT firmware, but never got around to that, and it works fine anyway). With no particular configuration, it acts as just a plain HBA and passes the individual disks through to the OS. It can be *configured* to present RAID volumes, but it doesn't do so without being told. – user Nov 04 '15 at 12:22