
I have two servers that I plan to use for storage. Each of them has a few SATA disks directly attached. I want the storage to remain available even if one of the storage servers is down (preferably the clients wouldn't even notice the failover, although I'm not sure if this is possible). The clients may access the storage via NFS and Samba, but this is not a must; I could use something else if needed.

I found this guide, Installing and Configuring Openfiler with DRBD and Heartbeat, which apparently does what I want. It relies on three components, Openfiler, DRBD, and Heartbeat, and all three need to be configured separately (see my sketch of the DRBD piece below). I'm wondering whether there are simpler solutions.
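To give a sense of the moving parts, here is a rough sketch of just the DRBD half of that setup; the resource name, hostnames, addresses, and disk device are all made up, and Heartbeat would still need its own configuration on top of this:

    # /etc/drbd.d/r0.res on both nodes (everything here is hypothetical):
    #   resource r0 {
    #     device    /dev/drbd0;
    #     disk      /dev/sdb1;
    #     meta-disk internal;
    #     on node1 { address 10.0.0.1:7789; }
    #     on node2 { address 10.0.0.2:7789; }
    #   }
    drbdadm create-md r0          # initialise DRBD metadata (run on both nodes)
    drbdadm up r0                 # attach the disk and connect to the peer
    drbdadm primary --force r0    # promote one node for the initial sync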

Is using DRBD+Heartbeat the best practice for a situation like mine? I'm also interested in alternatives that don't depend on DRBD.

netvope
  • Are you using shared storage or does each server have its own disk? What service are you providing to other machines? – MikeyB Feb 08 '11 at 08:26
  • @MikeyB Thanks. I've clarified the question accordingly. – netvope Feb 08 '11 at 08:42
  • "The clients may access the storage via NFS and samba, but this is not a must; I could use something else if needed." - does this extend as far as deploying a Windows environment, as it has a distributed file system that does exactly what you're after. – Mark Henderson Feb 08 '11 at 08:49
  • @Mark - Yeah, please post it as an answer and I'll figure out if it works for me :) Thanks – netvope Feb 08 '11 at 09:16

3 Answers


GlusterFS may be another option (http://www.gluster.org/). Gluster was designed from the ground up to be a distributed filesystem.
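For illustration, a minimal sketch of a two-node replicated volume (hostnames, volume name, and brick paths are hypothetical, and both servers are assumed to already be running glusterd):

    # Run from server1:
    gluster peer probe server2
    gluster volume create gv0 replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start gv0

    # On a client: clients that are already mounted keep working
    # against the surviving replica if one server goes down.
    mount -t glusterfs server1:/gv0 /mnt/storage

One caveat: the mount fetches the volume layout from the named server, so a client mounting while server1 is down would need to point at server2 instead (or pass a backup volfile server as a mount option).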

slashdot
  • GlusterFS looks promising too. High-availability setup guides: (1) http://www.gluster.com/community/documentation/index.php/Simple_High_Availability_Storage_with_GlusterFS_2.0 (2) http://www.howtoforge.com/high-availability-storage-cluster-with-glusterfs-on-ubuntu – netvope Feb 12 '11 at 04:06

Windows Server has this functionality via a feature called DFS (Distributed File System). Basically, you create a namespace inside your domain and access it like you would a traditional share.

For example, \\domain.local\ShareName\

You put your servers into the namespace and configure DFS Replication between them. If one host goes down, its data is still present on the other hosts, and the transition is seamless to end users, since they continue to access the namespace rather than the individual servers.

Mark Henderson
  • Negatives: a small replication delay, and on failover open file connections are broken (i.e. files must be reopened). Works like a charm in 99.9% of business cases, i.e. one can live with these negatives. – TomTom Feb 10 '11 at 15:20

Here's a different idea. You might want to check out FreeBSD/FreeNAS/Solaris (if you dare) and make use of the ZFS filesystem. A zpool itself is managed by a single host, but each server can export its disks over iSCSI so that one node builds a mirrored zpool spanning both servers. Set up that way, the storage should be relatively safe and highly available.
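As a rough sketch of what that could look like on the node that owns the pool (da1 and da2 are hypothetical iSCSI LUNs, one exported by each server):

    # Mirror across one LUN from each server, so the pool
    # survives the loss of either box.
    zpool create tank mirror da1 da2
    zfs create tank/share
    zfs set sharenfs=on tank/share   # ZFS-integrated NFS export
    zpool status tank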

You could check out Google for ZFS high-availability guides to get you started.

Stephan
  • Please let me know why you think this answer is so bad you had to rate it down. I am certainly open to criticism. – Stephan Feb 08 '11 at 10:58