
I've got a server with several directories shared via exports. One of the exported directories holds home directories; the others hold assorted data. Is it somehow possible to prioritize I/O coming from network clients to the "home" share over I/O to the other exports on the server? Right now, when there is heavy I/O on the "data" shares, the "home" share becomes less responsive, and I want "home" requests to always have priority over all other NFS-exported paths.

abbot
  • Where are the bottlenecks? I don't think you can control QoS per mount point. However, you could make sure that different mount points are located on different physical disks. Add RAM to the server (as much as you can) so caching becomes effective for hot files. Or move your data shares to a new machine, or a virtual machine, and throttle network or virtual resources on that machine. – The Unix Janitor Jul 18 '12 at 11:48
  • The bottleneck is currently the network. Mounts are already on different RAID arrays, and the "other" shares receive mostly read requests. As I can see from the network traffic monitor, the 1 Gb link is almost saturated in the direction from the server to the clients, but has enough capacity in the other direction, so if I could only make the server reply to "home" mount requests with higher priority than to other mounts, that would solve the problem... – abbot Jul 18 '12 at 13:17
  • You could always go to 10G Ethernet, or add Ethernet adapters to your machine and bond them together. Depending on the server, you should be able to fit 4x1G cards in there and team them into one logical adapter, giving you 8G tx/rx, which should alleviate some problems. Otherwise, split into virtual machines and do QoS based on each server's IP address. – The Unix Janitor Jul 18 '12 at 14:02
  • You can't do this unless an NFS proxy exists; I've never seen one in production myself. It's really hard to differentiate between mount points being accessed at the network layer. And since NFS tries to be transparent to the filesystem, it's hard to implement QoS at the filesystem layer too. However, your system seems to be bogged down with reads. Perhaps you could distribute a read-only copy of your data exports via rsync, or use something like http://www.linuxjournal.com/article/9769 – The Unix Janitor Jul 19 '12 at 15:25

1 Answer


So, taking into account my further troubleshooting:

Upgrade to 10G Ethernet, or multiple 10G Ethernet links, using adapter teaming to get increased bandwidth between the server and the clients.

Else

Install additional 1G adapters in the NFS server, and again team them together for increased bandwidth.
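As a rough sketch of teaming on a Linux NFS server (assuming a recent iproute2 and a switch that supports LACP; the interface names `eth1`–`eth4` and the address are placeholders):

```shell
# Create an 802.3ad (LACP) bond from four 1G NICs.
# eth1..eth4 are example names -- substitute your real adapters.
ip link add bond0 type bond mode 802.3ad
for nic in eth1 eth2 eth3 eth4; do
    ip link set "$nic" down          # NIC must be down to enslave it
    ip link set "$nic" master bond0  # attach NIC to the bond
done
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0  # example server address
```

Note that LACP hashes per flow, so a single client still tops out at 1G; the aggregate across many clients is what improves.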

Investigate jumbo frames, and optimise the NFS maximum transfer sizes.
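For example (the MTU must be raised end to end — server, clients, and every switch in between — and the export path and transfer sizes below are illustrative values, not recommendations):

```shell
# On the server and on every client, enable jumbo frames:
ip link set eth0 mtu 9000

# On a client, mount with larger NFS read/write transfer sizes.
# 'server:/export/home' and the 1 MiB sizes are example values;
# the server negotiates down to what it actually supports.
mount -t nfs -o rsize=1048576,wsize=1048576 server:/export/home /mnt/home
```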

Collect performance information with nfsstat.
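A few useful invocations (run on the server or client as noted):

```shell
# On the server: per-operation counts -- shows the read/write mix
nfsstat -s

# On a client: per-mount statistics and the mount options in effect
nfsstat -m

# On a client: RPC statistics -- a high retransmission count
# points at network trouble rather than disk trouble
nfsstat -rc
```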

Else

Break the server into two NFS servers (physical or virtual), one for home and one for data, and do network QoS based on server IP address.
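Once the shares sit behind distinct IP addresses, a minimal Linux traffic-control sketch could prioritize them with a `prio` qdisc and u32 filters (the addresses 10.0.0.1/10.0.0.2 and the interface `eth0` are hypothetical):

```shell
# Three-band priority qdisc on the client-facing interface;
# band 1:1 is dequeued before 1:2.
tc qdisc add dev eth0 root handle 1: prio bands 3

# Traffic from the "home" server (10.0.0.1) goes to the top band:
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip src 10.0.0.1/32 flowid 1:1

# Traffic from the "data" server (10.0.0.2) goes to a lower band:
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip src 10.0.0.2/32 flowid 1:2
```

This only shapes traffic at the box it runs on; for priority across the LAN the switches would need matching QoS configuration.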

Else

Read this http://sourceware.org/cluster/doc/nfscookbook.pdf

It has ideas for building clusters for both load balancing and fault tolerance. It increases the complexity, but you'll be able to keep adding bandwidth as long as your network backbone can cope :-)

I don't think there is a way to do any type of QoS within NFS itself.

The Unix Janitor
  • Thanks for your comments. Looks like you are right and there is no way to do this without splitting into different servers. – abbot Jul 20 '12 at 11:01