
I realize that since NFS is not block-level, LVM can't be used directly.

However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server?

Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server).

Expansion:
The reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem.

Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there were a way to accomplish this on a newer release as opposed to an older one.

From reading the UnionFS docs, it appears that I can't use it over a remote connection - have I misread them? More accurately, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to work in reverse: I'm trying to mount a bunch of NFS points in a merged fashion and then write to them - not caring where the data goes, a la LVM.
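To illustrate the goal (hostnames are the hypothetical ones above), here is the naive approach, which does not do what I want:

```
# Plain NFS cannot merge exports: each successive mount on /mnt/space
# simply shadows the previous one instead of combining with it.
mount -t nfs servera:/mnt/export /mnt/space
mount -t nfs serverb:/mnt/export /mnt/space   # hides servera's files
mount -t nfs serverc:/mnt/export /mnt/space   # hides serverb's as well
```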

warren

6 Answers


GlusterFS is very good for this job; you could also consider Lustre, though I haven't used that one yet. GlusterFS is independent of NFS, but it would be very easy to move onto it. You can also use it for RAID 10 over the network, which you might need in the future, and it is very easy to scale.
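A hedged sketch, assuming a GlusterFS release with the `gluster` CLI and a brick directory at /export/brick1 on each of three hypothetical servers; a plain "distribute" volume spreads files across the bricks, much like the LVM-style behaviour asked for:

```
# Run on servera to form the cluster and create the combined volume.
gluster peer probe serverb
gluster peer probe serverc
gluster volume create space servera:/export/brick1 \
    serverb:/export/brick1 serverc:/export/brick1
gluster volume start space
# On serverx, mount the merged namespace:
mount -t glusterfs servera:/space /mnt/space
```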

Yuri

While remarkably hackish, the route I ultimately ended up following is this:

  • Using VMware ESXi, add datastores that are NFS mounts (from wherever).
  • Create vdisks on those datastores
  • Add the vdisks to a VM running RHEL (because I'm used to RHEL)
  • Put all of the added vdisks into an LVM volume group
  • Export the LVM volume via NFS (see the sketch after this list)
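A minimal sketch of the last two steps inside the VM, assuming the vdisks show up as /dev/sdb, /dev/sdc and /dev/sdd (device names, volume names and export options here are hypothetical):

```
# Pool the vdisks with LVM, then export the result over NFS.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_space /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n lv_space vg_space
mkfs.ext3 /dev/vg_space/lv_space            # ext3 was the RHEL 4/5 default
mkdir -p /mnt/space
mount /dev/vg_space/lv_space /mnt/space
echo '/mnt/space *(rw,sync)' >> /etc/exports
exportfs -ra                                # publish the new export
```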

Pros:

  • simple
  • cheap
  • easy to replicate
  • with dynamic disk extension via VMware, the space can all be "allocated", but not "used" yet

Cons:

  • requires yet another layer (the hypervisor)
  • if any of the NFS mounts drops, the LVM could become corrupted (an issue faced under any of the potential solutions)
warren
  • Reading about this kind of hack makes me feel diiirty. But not necessarily in a bad way. I'll have to keep this in mind someday when I try to do something just as crazy. – Jed Daniels Apr 06 '10 at 19:27
  • @Jed Daniels: like I said, it's sketchy as all getout... but it works - and well :) – warren Apr 06 '10 at 19:37
  • Why not simply create files on the NFS mounts and add them as loopback devices? That way you can skip the VMware bit, which adds a lot of overhead. See http://www.mail-archive.com/debian-devel@lists.debian.org/msg220815.html for an explanation of how to do it (sketched after these comments). – w00t May 07 '10 at 09:12
  • BTW, regarding the NFS mounts dropping, you could create a RAID 5 set with dmraid, but you'd need to do some testing on how well that actually holds up. – w00t May 07 '10 at 09:14
  • @w00t - software RAID over NFS. Interesting idea :) – warren May 07 '10 at 12:13
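A hedged sketch of w00t's loopback idea (mount points, sizes and volume names are hypothetical):

```
# Create a backing file on each NFS mount, attach each as a loop
# device, then pool the loop devices with LVM - no hypervisor needed.
dd if=/dev/zero of=/mnt/nfs-a/space.img bs=1M count=10240
dd if=/dev/zero of=/mnt/nfs-b/space.img bs=1M count=10240
losetup /dev/loop0 /mnt/nfs-a/space.img
losetup /dev/loop1 /mnt/nfs-b/space.img
pvcreate /dev/loop0 /dev/loop1
vgcreate vg_loop /dev/loop0 /dev/loop1
lvcreate -l 100%FREE -n lv_loop vg_loop     # filesystem/export as above
```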

I'm currently using IBM's GPFS on an HPC Linux cluster. It supports multiple direct-attached nodes (we are using Fibre Channel), and other nodes can have network-based block-level access to the same volume.

pfo

You might also be interested in drbd+gfs.

ptman

How about iSCSI? A load of target machines, each presenting a block device to the initiator node? Then, on the iSCSI initiator, use LVM to join the block devices together, mount that, and export it as an NFS mountpoint?
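A hedged sketch of the initiator side, assuming open-iscsi and hypothetical target addresses; the filesystem and export steps would then mirror the ESXi answer above:

```
# Discover and log in to each target; the LUNs then appear as
# local block devices (e.g. /dev/sdb, /dev/sdc) ready for LVM.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m discovery -t sendtargets -p 192.0.2.11
iscsiadm -m node --login
pvcreate /dev/sdb /dev/sdc
vgcreate vg_iscsi /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
```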

Brad

Do you mean something like UnionFS (which isn't in any way analogous to LVM that I can think of), or just mounting several filesystems all next to each other (like /mnt/fs1, /mnt/fs2, /mnt/fs3)?

Also, re-exporting NFS mounts (and even NFS mounting filesystems that have filesystems mounted in them locally -- see the nohide option in exports(5)) is an exciting prospect, involving all sorts of corner cases and likely bug hideouts. "Here be monsters", indeed.
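For illustration, a hypothetical /etc/exports fragment using the nohide option from exports(5) mentioned above (paths and options are made up):

```
# /etc/exports - nohide lets a client see a filesystem that is mounted
# underneath an exported directory without issuing a second mount.
/mnt/space        *(rw,sync,no_subtree_check)
/mnt/space/sub    *(rw,sync,no_subtree_check,nohide)
```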

womble
  • the reason I pegged LVM is that I want a bunch of exported mounts (`servera:/mnt/export`, `serverb:/mnt/export`, `serverc:/mnt/export`) to all mount at `/mnt/space`, so that `/mnt/space` on this server (serverx) appears as one large filesystem – warren Nov 30 '09 at 03:16
  • and yes, I know that re-exporting is generally a Bad Thing... but I thought it might work, if there was a way to accomplish this on a newer release as opposed to an older one – warren Nov 30 '09 at 03:17
  • if NFS is required, I cannot think of anything other than unionfs (and family). – sybreon Nov 30 '09 at 03:21
  • Yeah, UnionFS sounds like the ticket here. – womble Nov 30 '09 at 03:27
  • from reading the unionfs docs, it appears that I can't use it over a remote connection - have I misread it? – warren Nov 30 '09 at 03:53
  • more accurately, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion, then *write* to them - not caring where data goes, *a la* LVM – warren Nov 30 '09 at 03:58
  • I don't think that's easily doable with NFS or UnionFS alone. GlusterFS lets you do something like this. It should even be possible to run the GlusterFS server and client on the same machine, and use each NFS volume as a GlusterFS "brick". Then use the distribute GlusterFS connector to, well, distribute your files among the bricks. – Kamil Kisiel Nov 30 '09 at 05:22
  • @Kamil - GlusterFS *does* look pretty close to what I'm looking for – warren Nov 30 '09 at 05:33
  • So, you don't want NFS, you want a cluster filesystem. – womble Nov 30 '09 at 23:40
  • @womble - perhaps that is what I'm looking for, and if so, I worded my question poorly – warren Dec 01 '09 at 08:09