
I'm planning to implement a 'poor man's storage' using Openfiler or NexentaStor CE.

I need the filer solution to provide iSCSI target and CIFS sharing abilities. The iSCSI target and CIFS share will later be mounted as XenServer Storage Repositories.
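
For reference, on the XenServer side an iSCSI target gets attached as a Storage Repository with xe sr-create. A rough sketch follows; the target IP, IQN and SCSI id are placeholders you would discover via sr-probe, and as far as I know a CIFS share can only be attached as an ISO library, not as VM storage:

    # Probe the target to list available IQNs (probe again with targetIQN to get the SCSI id)
    xe sr-probe type=lvmoiscsi device-config:target=192.168.0.10

    # Create a shared iSCSI SR on the discovered LUN (all values are placeholders)
    xe sr-create name-label="filer-iscsi" shared=true \
        type=lvmoiscsi \
        device-config:target=192.168.0.10 \
        device-config:targetIQN=iqn.2011-06.local.filer:xen-lun0 \
        device-config:SCSIid=360a98000686f6e6c6f000000000001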

I might also need replication ability, but no need for RAID since the filer will be installed on an 'elderly' server that already has an honest-to-goodness hardware RAID 1+0.

Between the two solutions above, which one do you recommend, and why?

Or, do you have in mind another solution besides Openfiler and NexentaStor CE?

pepoluan
  • May I ask why FreeNAS is out of the picture? Their latest release is quite popular. – pauska Jun 24 '11 at 11:30
  • @pauska no specific reason other than I forgot about FreeNAS :) Any comparison between FreeNAS and Openfiler and/or NexentaStor? – pepoluan Jun 24 '11 at 12:05

6 Answers


A quick note about Openfiler (and I hear NexentaStor is the same) when used as an iSCSI target - you are almost guaranteed to see timeout errors and targets dropping offline, requiring a reboot of the server to correct. This usually happens under heavy load (though I've seen it happen under light loads, too).

We went through hell with Openfiler using iSCSI for several weeks while we tried to nail down the problem. The issue isn't really Openfiler itself, but the iSCSI target module it uses (IET, the iSCSI Enterprise Target). There was some talk about them converting to SCST, which doesn't have the problem, but so far not a whole lot has happened. A Google search for "Openfiler cmd_abort" will tell you all about the current problems.

What we ended up doing is dumping iSCSI and just using NFS with Openfiler, and everything has been fine since - but since you mentioned iSCSI, I thought I'd mention the problem before you build everything and end up with nothing but problems.
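
For what it's worth, pointing XenServer at an NFS export instead is a one-liner; a minimal sketch, assuming the filer exports a path like /mnt/vol0/xen (server address and path are placeholders):

    # Attach the filer's NFS export as a shared XenServer SR
    xe sr-create name-label="filer-nfs" shared=true content-type=user \
        type=nfs \
        device-config:server=192.168.0.10 \
        device-config:serverpath=/mnt/vol0/xen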

Also, Openfiler's management interface has a couple of interesting bugs. We had continuous issues managing large volumes (4 TB), with the interface not letting us delete volumes, or not letting us recreate them after we finally managed to delete them. It appeared that the interface wasn't always cleaning up after itself, leaving things behind. When we later tried to work with the volume, these leftover bits would cause an error behind the scenes when the script issued new commands to the OS, and the web interface would simply refuse to do what it was asked to do (and it didn't report the error it was hitting when issuing the commands, so we had no idea why it wasn't working).

In another experience, we wandered into the NIC setup to look at bonding interfaces. We walked through the setup for it, just to take a look, then clicked cancel - Openfiler applied the changes anyway, disconnecting us from the server in the process. We had to jump to the server's console and log in locally, then edit configuration files by hand to regain access to the server.

All in all, not a great experience with Openfiler - and with the project being virtually abandoned since 2009, I'd be inclined to avoid it, or be prepared to fight with it to get it set up, then not touch it for fear of breaking something and losing your data.

Paul
  • IET is known to die under heavy load, which is why RHEL uses tgt instead. Nexenta, however, doesn't use IET afaik. – dyasny Jan 07 '12 at 13:41
  • OF is unlikely to move away from IET to SCST because they make their money by selling SCST as an upgrade for people who invested in an OpenFiler project and then realized that it isn't reliable. So I see them volunteering a change as very unlikely. – Scott Alan Miller Dec 09 '12 at 06:58

There seems to be more momentum behind NexentaStor. You haven't provided much detail on the hardware other than that it's old - what are the CPU/RAM numbers? One reason I'd go the NexentaStor route, though, is the inline compression on its storage volumes. Your setup probably isn't suitable for the deduplication features, but compression comes with a negligible penalty on ZFS-based storage systems.
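
To illustrate: NexentaStor exposes this through its web GUI and NMC shell, but underneath it is just a ZFS property, so the effect is roughly the following (the pool/folder name is a placeholder):

    # Enable inline compression on the folder backing the storage repository
    zfs set compression=on tank/xen-sr

    # Later, check how much space it is actually saving
    zfs get compression,compressratio tank/xen-sr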

Nexenta is reasonably easy to manage, and the GUI covers most of the day-to-day features. Can you provide more detail on the disk setup?

ewwhite
  • Well, it's an HP DL180 with 8 SATA drives (500 GB each). It has a Xeon processor, but I forgot which model exactly. – pepoluan Jun 24 '11 at 12:04
  • The DL180 can work well for ZFS if you replace the existing Smart Array controller with a pure SAS HBA like an LSI 9211-8i. See: http://serverfault.com/questions/84043/zfs-sas-sata-controller-recommendations - It's certainly more flexible than using the hardware RAID arrangement. – ewwhite Jun 24 '11 at 12:26
  • Hmmm... I see. I kind of like the DL180's controller, though. Besides, if I need to buy another HBA, well... let's say that the server is no longer a 'poor man's storage' ;) (read that as: management will *not* *ever* grant me a budget for that). – pepoluan Jun 24 '11 at 12:28
  • As noted above, ZFS-based solutions like full access to the raw disks. HP Smart Array controllers don't allow that (unless you create multiple RAID 0 logical drives), which causes problems during drive replacements and disk failures. – ewwhite Jun 24 '11 at 12:35
  • Understood. I still have qualms about making the drives one honkin'-big RAID 0 phantom. So I guess if I ever want to try ZFS, I'll just expose every drive as an independent drive. Hmmm... I need to test whether hotswapping still works if the Smart Array is not managing the drives as a RAID array. Thanks for the explanation! – pepoluan Jun 25 '11 at 03:46
  • The hotswapping won't work. – ewwhite Jun 25 '11 at 05:29
  • If you plan to continue using your hardware RAID, you should be aware that ZFS (on Nexenta) really needs direct access to the individual disks in order to be fully operational (see the sketch after this list).
  • CIFS is somewhat limited on Nexenta CE, as it currently can't use LDAP users for access control; that only works with NFS. For CIFS, you need to create local users on the Nexenta appliance. For me this is a major drawback, but the documentation claims that there is ongoing work to fix it. I'm not sure whether it would work when bound to an AD domain, but maybe that's irrelevant for you anyway.
  • Snapshots on ZFS are really nice. You can create an unlimited number of snapshots, and they have basically no overhead at all. Openfiler works with Linux LVM as far as I know, so I guess it will suffer LVM's typically heavy performance penalty when doing snapshots.
  • For Xen storage, the deduplication offered by Nexenta could come in very handy, but it needs loads of RAM.
  • I don't want to spread FUD, but the future of Nexenta is still a bit unclear for me with Oracle controlling ZFS and Solaris.
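
A rough sketch of what the raw-disk pools and snapshots mentioned above look like from the ZFS command line (Solaris-style device names and the dataset names are placeholders):

    # ZFS wants whole disks, not logical drives carved out by a RAID controller
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

    # Snapshots are cheap and can be taken as often as needed
    zfs snapshot tank/xen-sr@before-patching
    zfs list -t snapshot
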
Sven
  • Well, I don't plan to bind the filer to an AD domain, but thanks for the reminder. Point #1 is kind of a dealbreaker for me though; I like the DL180's hotswap + hotrebuild ability. – pepoluan Jun 24 '11 at 12:27
  • Generally, you can do this with ZFS as well, just like any other software RAID. It just needs direct access to the disks for some advanced features like checksumming and recovery. I am not familiar with the Smart Array controllers, but maybe it's possible to export the eight disks as eight single-disk JBODs, or alternatively replace the controller as @ewwhite suggested. – Sven Jun 24 '11 at 12:34
  • Replacing the controller would mean asking management for more moolah, and I'm 100% sure I'll get 0% approval. Making them RAID 0... I'm not comfortable with the idea. Like I said in a comment to @ewwhite, I need to experiment first with the Smart Array + ZFS: if I expose the drives as independent drives, will hotplug still work? Anyway, thanks for the information! – pepoluan Jun 25 '11 at 03:48

I personally use Solaris 11 Express with my VMware cluster presenting the various ZFS pools to VMware via NFS.

I'm quite comfortable with the Solaris 11 console and prefer the direct ZFS control this presents.

I tried Openfiler (no ZFS), NexentaStor CE (limited web interface for some functions), FreeNAS (ZFS version was too old) and OpenSolaris (the current fork situation needs to settle down) before deciding to just use Solaris 11 natively.
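
As a sketch of the kind of direct control this gives, creating a dataset and exporting it to the VMware hosts over NFS is only a couple of commands (pool and dataset names are placeholders; in practice you would restrict the export to the ESX hosts):

    # Create a dataset for the VMware datastore and publish it over NFS
    zfs create -o compression=on tank/vmware
    zfs set sharenfs=on tank/vmware

    # Confirm the export is active
    zfs get sharenfs tank/vmware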

Asinine Monkey
  • Sounds interesting. Are there any limitations on Solaris 11 Express? – pepoluan Jun 25 '11 at 03:49
  • Unlike OpenSolaris and FreeBSD, Solaris 11 Express is free only for evaluation; you require a license for production use (http://www.c0t0d0s0.org/archives/7033-Solaris-11-Express-for-production-use.html). Then again, NexentaStor CE is limited to 18 TB for "free", and other ZFS-supporting operating systems will always be behind whatever ZFS version ships with an official Solaris release. Go with what you feel comfortable using. – Asinine Monkey Jun 25 '11 at 20:02

If you plan on sticking with the hardware RAID controller, go with OpenFiler. If you can invest in a JBOD controller, go with NexentaStor, simply because of the better features the ZFS filesystem has to offer over Linux's LVM + ext4/XFS/ReiserFS/etc. I would make the ZFS investment simply because it all but eliminates the risk of silent data corruption. But if you're just testing stuff and don't want to spend a penny, OpenFiler is a good distro.
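
On the corruption point: because ZFS checksums every block, it can verify (and, given redundancy, repair) an entire pool on demand; a minimal illustration, with the pool name being a placeholder:

    # Walk every block in the pool and verify its checksum
    zpool scrub tank

    # Show any checksum errors that were found and repaired
    zpool status -v tank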

churnd

I should point out that NexentaStor CE is not free for business use. I've spoken directly to Nexenta about this. Up to 18 TB is free for hobbyist and lab use, but production use is always paid, no matter what.

Check out NAS4Free and FreeNAS as cheap alternatives to OpenFiler and NexentaStor that don't have the iSCSI issues or the high cost associated with them.

And if you don't need the web interface, just use Linux or FreeBSD on its own.
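
For example, a bare NFS export for a XenServer SR on a stock Linux box is little more than a line in /etc/exports (the path and subnet are placeholders):

    # /etc/exports
    /srv/xen-sr  192.168.0.0/24(rw,sync,no_root_squash)

    # Re-read /etc/exports and make the export live
    exportfs -ra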

http://www.smbitjournal.com/2012/04/choosing-an-open-storage-operating-system/

But I agree with everyone that NFS is the best option regardless.

  • Please read our [faq] in particular [May I promote products or websites I am affiliated with here?](http://serverfault.com/faq#promotion). – user9517 Dec 09 '12 at 09:01