
Inside the VM (VirtualBox, NAT networking), the host's address is 10.0.2.2 and the guest's local address is 10.0.2.15; on the host side, the guest's connection appears to come from 127.0.0.1. To connect:

sudo mount -vvvt nfs4 -o clientaddr=127.0.0.1 10.0.2.2:/srv /mnt

I specified clientaddr because I figured the problem might be the addresses not matching, but it doesn't change anything. After a few minutes the client returns the usual "Permission denied" error ("access denied by server").

On the server side, I run

# rpc.mountd -d all -F
# rpc.idmapd -vvvf
# rpc.nfsd -d

I use systemd, so I am also monitoring the journal for any output. When I make the mount request, the following is visible over the network:

reply ERR 20: Auth Bogus Credentials (seal broken)

but nothing appears in the journal (which should have the output of rpc.nfsd) or in the output of rpc.mountd or rpc.idmapd, aside from some startup messages. Actually, in the case of rpc.mountd, I get the following occasionally:

rpc.mountd: auth_unix_ip: inbuf 'nfsd 127.0.0.1' 
rpc.mountd: auth_unix_ip: client (nil) 'DEFAULT'

As far as I am aware (please correct me!), there is no other source of information about what NFS is doing, and no configuration that controls its logging either. I have enabled the verbose/debug flags for each daemon, so I'm at a loss as to how I am supposed to diagnose this issue.

I am assuming that it is a problem with my exports file, which is as follows:

/srv 127.0.0.1(rw,sync,no_subtree_check,no_root_squash)

But I would rather actually get some feedback from the system about what is going wrong than fiddle with my exports file by trial and error. So, does anyone know where I can find out more about what's going on?
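The closest thing I have come across is the kernel's rpcdebug switch, which is supposed to route NFS/RPC debug messages into the kernel log; I have not confirmed that it covers this code path, but for reference:

rpcdebug -m nfsd -s all
rpcdebug -m rpc -s all
# reproduce the mount attempt and watch the kernel log (journalctl -k -f), then turn it off:
rpcdebug -m nfsd -c all
rpcdebug -m rpc -c all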

Thanks!

EDIT

I recently ran exportfs -rav

and now the client immediately returns 'Operation not permitted', and rpc.mountd outputs:

rpc.mountd: auth_unix_ip: inbuf 'nfsd 127.0.0.1'
rpc.mountd: v4root_create: path '/' flags 0x12401
rpc.mountd: v4root_create: path '/srv' flags 0x10401
rpc.mountd: auth_unix_ip: client 0x1d69d70 '127.0.0.1'
rpc.mountd: nfsd_fh: inbuf '127.0.0.1 1 \x00000000'
rpc.mountd: nfsd_fh: found 0x1d73e90 path / 

but this output may just be a consequence of having run exportfs. (Note that I had restarted the daemons several times before, so I don't know why exportfs made a difference.)

OK, it seems that adding the 'insecure' option has fixed it:

secure  This option requires that requests originate on an Internet port less than
        IPPORT_RESERVED (1024). This option is on by default. To turn it off, specify
        insecure.

This is odd, since I was running the NFS client as root.
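For completeness, the export line that works now is presumably just the original one with insecure added:

/srv 127.0.0.1(rw,sync,no_subtree_check,no_root_squash,insecure)

My guess is that the VirtualBox NAT engine opens a new connection on the host side using an ordinary ephemeral (unprivileged) source port, so the server sees a port >= 1024 no matter what the client does as root.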

In any case, why wasn't this issue made apparent to the operator (myself)? I don't see how a piece of software can be considered fit for production use if its diagnostics are kept completely hidden, rendering it inaccessible to non-experts. I don't mean to bash NFS here, but it seems like a notoriously obfuscated system that could really use some more transparency, given how frequently it is used. Anyway, thanks for reading.

A__A__0
  • love the fact that this has been viewed over 2k times, what a joke – A__A__0 Oct 29 '17 at 02:28
  • (i.e. the fact that there are thousands, if not hundreds of thousands, of man-hours being wasted by people scouring the internet for clues about how to get NFS to work, because it is so obfuscated) – A__A__0 Mar 17 '18 at 22:22
  • I've been looking at how to get this sorted the whole day. Why in the holy Jesus is NFS so complicated – Erik K Jan 11 '20 at 19:20

2 Answers


One thing to try is wide-open permissions in /etc/exports (0.0.0.0/0 is probably the correct "wide open" client specification). If that works, then it's probably something to do with NFS not recognizing where the client request is coming from, even though I notice you mentioned that the network traffic is NAT'd.
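For example, a throwaway test entry might look like this (just reusing the /srv export from the question to see whether client-address matching is the problem; not something to keep):

/srv 0.0.0.0/0(rw,sync,no_subtree_check,no_root_squash)

followed by exportfs -rav to re-export.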

ekeyser

This may not solve anyone else's problem, but here's what worked for me. After changing the server's hostname and rebooting, I was getting this "Auth Bogus Credential" bullsh-t.

  1. Make sure you executed your bind mount. For me, my export was /srv/nfs4/foo, so I did

    sudo mount --bind /home/jay/foo /srv/nfs4/foo

  2. Clear the NFS etab cache and re-export your exports

    exportfs -rav

Like magic, it works again. I put those two things into a script so if I need them again I won't have to go hunting and swearing.

#!/bin/bash
# Re-create the bind mount that backs the NFSv4 export, then re-export everything.
sudo mount --bind /home/jay/foo /srv/nfs4/foo
sudo exportfs -rav
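Alternatively, I suppose the bind mount could go straight into /etc/fstab so it comes back on its own after a reboot (assuming the NFS server starts after local filesystems are mounted), something like:

/home/jay/foo  /srv/nfs4/foo  none  bind  0  0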
Jay