
Does anyone here have any experience with OpenFiler in a large production environment they would like to share? We have about 3 TB of document images and databases and expect to grow rapidly in the near future (perhaps 10 or more TB).

Clarification

We will most likely be connecting to the SAN via iSCSI over Gigabit Ethernet from web, database and FTP servers.
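For context, attaching one of those servers as a Linux iSCSI initiator would look roughly like this; the portal IP and target IQN below are placeholders, not details from our actual setup:

```shell
# Discover the targets the Openfiler box exposes (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to a discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2009-06.com.example:docs -p 192.0.2.10 --login

# The LUN then shows up as an ordinary block device (e.g. /dev/sdb)
```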

Jim Blizard
  • Mostly out of curiosity, how will you be attaching the storage to the openfiler? – Matt Simmons Jun 04 '09 at 14:29
  • Also see - http://stackoverflow.com/questions/534667/does-anyone-uses-openfiler-in-production (probably doesn't belong on stack overflow) – Bob Jun 04 '09 at 15:14
  • Make sure when you set up openfiler you do it with all new drives which do not have any data. Openfiler uses LVM and doesn't play nicely with drives with existing partitions on them. It will probably be easiest to setup openfiler with nothing on it, then migrate your documents and images later. – Bob Jun 06 '09 at 02:16

7 Answers


Outside of OpenFiler, your options would be other NAS OSes (FreeNAS, NASLite...), dedicated NAS appliances running custom software or completely rolling your own with a mainstream linux distribution (or Windows if you really want).

I have been experimenting with FreeNAS and OpenFiler for the past month or so, and I am putting my eggs in the OpenFiler basket. I haven't run it in a production environment, but all the research I have done points to OpenFiler as the enterprise solution compared to other NAS OSes and NAS appliances. In pretty much every performance review I have seen, it outperforms those solutions, though this of course depends on the hardware you run it on and how you tune the server. Also, based on my research, everyone who has purchased the support package says the support is excellent.

You can also see someone's comparison of FreeNAS to OpenFiler here to get an idea of the performance they saw with OpenFiler.

http://www.scribd.com/doc/29643/OpenFiler-vs-FreeNAS

I have not compared OpenFiler to custom solutions, though. Some people prefer to support and maintain a more common Linux distribution like Ubuntu and expose NAS features manually. I think this would be the preferred solution if you are running on uncommon hardware, but that is not something I am interested in doing. I'd imagine you could get as good or possibly better performance from a custom solution, depending on your hardware and the support for it.

As long as you have a decent hardware setup and a support package, I'd say OpenFiler would be a success in production. Just be sure to check the hardware compatibility page.

Bob

Just a couple of quick notes after spending a fair amount of time with Openfiler in a cluster setup with 8TB of storage:

  1. The 2TB limit is most likely because you're using MBR (Master Boot Record; Openfiler calls it "msdos", even though that isn't really correct) partitioning rather than GPT (GUID Partition Table). With 512-byte sectors, MBR is limited to 2TB partitions. See http://en.wikipedia.org/wiki/GUID_Partition_Table

  2. Watch out for Openfiler's iSCSI implementation when using VMware's ESX/ESXi (or vSphere, which uses ESX/ESXi). Under heavy load, Openfiler's iSCSI module will choke; we've run into this a bunch of times, and Openfiler will take volumes offline if it sees too many errors from the iSCSI module it uses (IET). The Openfiler team is working to replace IET with SCST, a different module that is essentially a rewrite of it. A Google search for "Openfiler cmd_abort" will tell you all about the current problems.

  3. If you need e-mails from your RAID controller to report failing disks (and you do), and you don't want to mess around with cramming management stuff into rPath (Openfiler's Linux choice), use a RAID controller that has a LAN port on it, such as a number of the Areca cards.

  4. For a stable, free iSCSI solution, Open-e offers a lite version of their DSS V6, which is fairly similar to Openfiler under the hood but uses SCST. Its clustering capability isn't as complete as Openfiler's, but it's far easier to set up, and the management interface is much easier to work with. The catch is that the free version limits you to 2TB of storage and no clustering; you have to pay for a license if you want clustering or more than 2TB. Their product is VMware certified, though.

  5. Openfiler's management interface has a few bugs that we encountered. In some situations, creating volumes results in bizarre sizes you didn't ask for, and while we were exploring the NIC bonding options, Openfiler applied our settings even though we clicked cancel (which disconnected us from the server and left us to sort things out from the console via the command line).

  6. If you want speed (who doesn't?), find a way to use multiple NICs and MPIO to multiply your gigabit bandwidth. If your SAN will be talking mostly to one machine (as is usually the case if you're using ESX), do NOT use link aggregation. It's a common misconception that 802.3ad delivers more speed; it doesn't unless multiple machines are pulling data at the same time. If only one machine is accessing the SAN over an aggregated link, you'll only see the speed of ONE of the links (i.e. 1Gbps).
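The 2TB figure in point 1 falls straight out of the arithmetic: MBR stores partition sizes as 32-bit sector counts, so with the usual 512-byte sectors:

```shell
# MBR uses 32-bit LBA fields; with 512-byte sectors the largest
# partition it can describe is 2^32 * 512 bytes = 2 TiB.
sector_size=512
max_sectors=$(( 2 ** 32 ))
max_bytes=$(( max_sectors * sector_size ))
echo "MBR limit: $(( max_bytes / 1024 / 1024 / 1024 / 1024 )) TiB"   # prints "MBR limit: 2 TiB"
```

GPT uses 64-bit LBAs, which is why switching the partition label makes the limit go away.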

Hmm...That wasn't such a quick response after all. :)

Paul

No real details (because they didn't run into significant issues as far as I know), but it held up well according to a buddy who ran it for a while on their Windows/Mac network. I think they served a couple dozen terabytes of video and documents out of it.

Karl Katzke

I have a test server connected to a Promise vTrak 15200 (piece of junk) via iSCSI, and encountered problems using volumes larger than 2TB. I'm not sure whether this is a limitation of Openfiler or of the vTrak, but my solution was to create several arrays on the vTrak and then stripe them together with software RAID on the Openfiler.
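The striping step can be sketched as follows; the device names and volume group name are placeholders for whatever LUNs the array actually exposes:

```shell
# Stripe two 2TB LUNs from the external array into one 4TB device (RAID 0)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Hand the striped device to LVM so Openfiler can carve volumes out of it
pvcreate /dev/md0
vgcreate vtrak_vg /dev/md0
```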

Other than that, Openfiler seemed to work great (it served as our backup-to-disk storage a month ago when our tape library died). It also integrated nicely into Active Directory without much hassle.

pauska
  • The 2TB limit is from your card, not openfiler. I have a Highpoint (2313?) which also has a 2TB limit. I recommend you expose the attached disks as individual disks and create a software raid. This will allow for volumes greater than 2TB – J Sidhu Nov 21 '09 at 04:44
  • Thanks, good to get it confirmed that someone else also have the same issue - only that it's not my card but the controller on the external array. Can't expose them as individual disks either. – pauska Nov 23 '09 at 08:00
  • Similar issue with an old RMTrak + Compaq card here, we have 2Tb raw storage available but can only expose two volumes of half the storage each, due to limitations on the card + controller. You can sew the storage back together as some form of dynamic volume (depending on the OS) but this isn't possible in all scenarios, or desirable from a performance PoV. – Chris Thorpe Jan 21 '10 at 22:00

We have Dell PowerEdge servers and are testing OpenFiler 2.3 on them. The problem we're running into is finding a way to monitor disk health on the machines, since Dell doesn't provide a Dell OpenManage installation method for rPath (the distro OF is built on). I'm in the middle of trying to interface with the PERC5/i controller via the command line so it can at least e-mail us when a drive has issues.
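As a rough illustration of the stopgap, a cron-driven script like the sketch below could scrape the controller CLI and e-mail on anything abnormal; the MegaCli path, its output format, and the address are assumptions, not something tested against a PERC5/i:

```shell
#!/bin/sh
# Hypothetical sketch: poll the RAID controller and mail any logical
# drive whose state is not "Optimal". The binary path and output
# format are assumptions and vary between MegaCli versions.
STATE=$(/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL | grep '^State')
if echo "$STATE" | grep -qv Optimal; then
    echo "$STATE" | mail -s "RAID problem on $(hostname)" admin@example.com
fi
```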

Hopefully Dell will soon provide an OMSA version for Linux distributions other than RedHat and SuSE.

colemanm

I do not use it yet, but I certainly plan to. If you want to be able to save yourself in the future, keep this information in mind:

http://www.howtoforge.com/installing-and-configuring-openfiler-with-drbd-and-heartbeat

I am working on an HA ESX cluster at home for training.
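The DRBD half of that howto boils down to a resource definition along these lines; the hostnames, backing devices, and addresses below are placeholders:

```
resource openfiler_data {
  protocol C;               # synchronous replication between the two nodes
  on filer01 {
    device    /dev/drbd0;
    disk      /dev/sdb1;    # backing partition on node 1
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on filer02 {
    device    /dev/drbd0;
    disk      /dev/sdb1;    # backing partition on node 2
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Heartbeat then handles failing the DRBD primary role and the Openfiler services over between the two nodes.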

XTZ

We use Openfiler to host a large amount of data (5TB+), including a Xen DomU for websites with over a million hits daily, almost without any problems. If you need iSCSI, there are probably no stable free alternatives.

disserman