8

I have a small LAN with a couple of Linux boxes (Ubuntu 9.10) that have NFS shares on them. The boxes are networked through a consumer-grade Netgear router (model WGR614V9) over wired connections.

When I first set up the NFS shares, I noticed that performance was pretty terrible. For example, it would take a few minutes to copy 40 MB worth of data from a mounted NFS share to local disk.

By playing around with the NFS configuration, I was able to get things running reasonably well. The configuration I settled on for the system exporting the share was:

# /etc/exports on the machine exporting the NFS share
/exprt/dir client.ip(rw,async,no_root_squash,no_subtree_check)
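After editing /etc/exports, the export table has to be reloaded before clients see the new options (note that the options are attached directly to client.ip with no space in between; with a space they would apply to the wildcard host instead). A minimal sketch, with client.ip and server.ip standing in for the real addresses:

# On the server: re-read /etc/exports and apply the current options
sudo exportfs -ra

# On the server: list what is exported, with the effective options
sudo exportfs -v

# On the client: check that the share is visible
showmount -e server.ip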

For the NFS client, I have

# /etc/fstab on the NFS client
server.ip:/exprt/dir /imprt/dir nfs rw,noatime,rsize=32768,wsize=32768,timeo=14,intr 0 0
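With that line in /etc/fstab, the share can be mounted by its mount point, and the options the kernel actually negotiated can be checked afterwards. A rough sketch using the placeholder paths from above:

# Mount the share listed in /etc/fstab
sudo mount /imprt/dir

# Confirm the mount and the options actually in effect
grep /imprt/dir /proc/mounts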

However, while this works reasonably well, it still seems to be faster to copy files from one machine to the other with scp than over NFS.
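One rough way to quantify the difference is to time both transfers on the same file; test.bin here is just a hypothetical file of a few hundred MB:

# Copy over NFS: read from the mounted share to local disk
time cp /imprt/dir/test.bin /tmp/

# Copy the same file over scp for comparison (user and server.ip are placeholders)
time scp user@server.ip:/exprt/dir/test.bin /tmp/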

I thought it would be worth asking what NFS configurations other people are using on similar network setups that give reasonably good performance. I know NFS can be pretty sensitive to things like the choice of OS and the precise network configuration. But I suspect my setup is pretty common among other users with small local networks, so it would be useful to hear what configuration works best for them.

Note: I originally asked this question on Super User, but got no replies, so I suspect it might have been the wrong forum for this type of question.

dmcer
  • 195
  • 1
  • 5

4 Answers

3

It's pretty standard for scp to be quicker than NFS; there's a lot more overhead and things that need doing for a network filesystem than for a simple machine-to-machine transfer.

womble
  • 95,029
  • 29
  • 173
  • 228
  • 2
    SCP should not be faster at all. SCP has ongoing encryption that takes resources. SCP might have compression turned on, though, which could make a difference if there is enough horsepower. Here is a comparison of throughput from different technologies: http://forums.neurostechnology.com/index.php?topic=9263.0 – Scott Alan Miller Jan 21 '10 at 17:34
  • 1
    @ScottAlanMiller Those benchmarks are for a specific embedded device OSD. If you aren't using the OSD device, I wouldn't trust those benchmarks to be accurate. – rox0r Oct 01 '12 at 18:37
2

NFS should give you about 50% of the underlying disk write performance. If your disk does 100 MB/s, then you should be able to do 50 MB/s over NFS.
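A quick way to sanity-check that ratio is to compare a raw write on the server's disk with the same write done over the NFS mount; a rough sketch with placeholder paths (conv=fdatasync makes dd wait until the data actually reaches the disk):

# On the server: raw write speed of the underlying disk
dd if=/dev/zero of=/exprt/dir/testfile bs=1M count=1024 conv=fdatasync

# On the client: the same write, this time through the NFS mount
dd if=/dev/zero of=/imprt/dir/testfile bs=1M count=1024 conv=fdatasync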

About the mount options: use TCP. UDP can give pretty bad results if your network is heavily loaded or any network device is flaky.
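As a sketch, forcing TCP is just a matter of adding proto=tcp to the client's mount options; the paths and addresses mirror the placeholders used in the question:

# /etc/fstab on the client, with TCP requested explicitly
server.ip:/exprt/dir /imprt/dir nfs rw,noatime,proto=tcp,rsize=32768,wsize=32768,timeo=14,intr 0 0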

wazoox
  • 6,782
  • 4
  • 30
  • 62
  • 2
    Where did you get this 50% figure? A benchmark you've done? A benchmark you read on the web? – sebthebert Jan 21 '10 at 22:15
  • That's what I regularly get on all the systems I build (several hundred so far :) – wazoox Jan 22 '10 at 16:03
  • Actually I've been quite pessimistic; you _can_ get 90% of the disk throughput through NFS, but I usually test systems with very fast RAID arrays that easily saturate their network interfaces. – wazoox Jan 22 '10 at 17:44
  • 1
    Can you say more about what might go wrong with UDP? – Norman Ramsey Aug 18 '12 at 15:29
  • UDP doesn't ensure correct data transmission. TCP does, at some cost (mainly CPU). A few years ago, most computers weren't fast enough to saturate a GigE interface using TCP/IP, but that's not a problem anymore. Furthermore, if your network is heavily loaded or your NIC very busy, UDP traffic may lose lots of packets, thus achieving low performance overall. See ifconfig and nfsstat output for information. – wazoox Aug 19 '12 at 18:08
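Roughly, the things to look at are the error and drop counters on the network interface and the retransmission counters on the NFS client; a small sketch (eth0 is a placeholder for the interface name):

# Look for errors, dropped packets and overruns on the NIC
ifconfig eth0

# Client-side RPC statistics; a high "retrans" count points at packet loss
nfsstat -rc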
0

I normally just use SMB and get fine performance. I would like to point you to this site, just in case you have not looked it over.

http://nfs.sourceforge.net/nfs-howto/ar01s05.html

Justin S
  • 350
  • 3
  • 15
  • 1
    NFS gives higher throughput overall; very fast CIFS clients achieve 70 MB/s, while fast NFS achieves 120 MB/s (full GigE bandwidth). – wazoox Jan 20 '10 at 12:49
0

I use rsize=8192,wsize=8192 here, and I have no complaints about performance. I haven't measured it, though.
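For reference, the kernel may negotiate smaller sizes than the ones requested, so it can be worth checking what is actually in effect; a small sketch with placeholder paths and addresses:

# Mount with 8 KiB read/write sizes requested explicitly
sudo mount -t nfs -o rsize=8192,wsize=8192 server.ip:/exprt/dir /imprt/dir

# Show each NFS mount together with the rsize/wsize actually in use
nfsstat -m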

Teddy
  • 5,134
  • 1
  • 22
  • 27