I have a small LAN with a couple of Linux boxes (Ubuntu 9.10) that have NFS shares on them. The boxes are networked through a consumer-grade Netgear router (model WGR614V9) over wired connections.
When I first set up the NFS shares, I noticed that performance was pretty terrible. For example, it would take a few minutes to copy 40 MB of data from a mounted NFS share to local disk.
By playing around with the NFS configuration, I was able to get things running reasonably well. The configuration I settled on for the system exporting the share was:
# /etc/exports on the machine exporting the NFS share
# (note: no space between the client and its options; with a space,
# the options would apply to all hosts instead of just client.ip)
/exprt/dir client.ip(rw,async,no_root_squash,no_subtree_check)
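For what it's worth, after editing /etc/exports I re-export rather than restarting the whole NFS server; a small sketch (run as root on the server; /exprt/dir and client.ip are the placeholders from above, not real values):

```shell
# Re-export everything listed in /etc/exports after editing it:
sudo exportfs -ra
# Confirm what is actually exported, and with which options:
sudo exportfs -v
```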
On the NFS client, I have:
# /etc/fstab
server.ip:/exprt/dir  /imprt/dir  nfs  rw,noatime,rsize=32768,wsize=32768,timeo=14,intr  0  0
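Whenever I change the mount options, I remount to make sure they actually take effect; a sketch using the mount point above (requires the share to be listed in fstab):

```shell
# Remount the share so the new fstab options take effect:
sudo umount /imprt/dir
sudo mount /imprt/dir
# Check which options are actually in effect on the live mount:
mount | grep /imprt/dir
```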
However, while this seems to work reasonably well for me, it is still faster to copy files from one system to the other using scp than it is over NFS.
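To put rough numbers on "faster", I time raw sequential throughput with dd. A sketch: TESTDIR defaults to /tmp here so the commands run anywhere, but pointing it at the NFS mount (/imprt/dir above) measures the share itself; the file name is just a placeholder.

```shell
# Rough sequential-throughput check. Point TESTDIR at the NFS mount
# (e.g. TESTDIR=/imprt/dir) to measure the share; defaults to /tmp.
TESTDIR=${TESTDIR:-/tmp}
# Write 40 MB and force it to disk before dd reports a rate:
dd if=/dev/zero of="$TESTDIR/nfs-test.bin" bs=1M count=40 conv=fsync 2>&1
# Read it back (note: this may be served from cache on a repeat run):
dd if="$TESTDIR/nfs-test.bin" of=/dev/null bs=1M 2>&1
rm -f "$TESTDIR/nfs-test.bin"
```

Comparing those rates against a timed scp of the same file gives a like-for-like baseline.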
I thought it would be worth asking what NFS configurations other people are using on similar networks that result in reasonably good performance. I know NFS can be pretty sensitive to things like choice of OS and the precise network configuration. But I suspect the setup I have is pretty common among users with small local networks, so it would be useful to hear what configuration works best for them.
Note: I originally asked this question on Super User, but got no replies, so I suspect it was the wrong forum for this type of question.