
I am trying to mount HDFS on my local machine running Ubuntu with the following command:

sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/

But I am getting this error:

mount.nfs: mount system call failed

The output of

rpcinfo -p 192.168.170.52

is:

        program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  48435  status
        100024    1   tcp  54261  status
        100005    1   udp   4242  mountd
        100005    2   udp   4242  mountd
        100005    3   udp   4242  mountd
        100005    1   tcp   4242  mountd
        100005    2   tcp   4242  mountd
        100005    3   tcp   4242  mountd
        100003    3   tcp   2049  nfs

The output of

showmount -e 192.168.170.52

is:

Export list for 192.168.170.52:
/ *

I also tried adding

    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>

to my core-site.xml file in /etc/hadoop/conf.pseudo, but it did not work.

The output of

sudo mount -v -t nfs -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/

is:

mount.nfs: timeout set for Thu Jun 29 09:46:30 2017
mount.nfs: trying text-based options 'vers=3,proto=tcp,nolock,addr=192.168.170.52'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.170.52 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 192.168.170.52 prog 100005 vers 3 prot TCP port 4242
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed

Please help me with this.

Bhavya Jain

1 Answer


What @84104 is saying is true, but I managed to get it working with the following config/steps:

  1. install nfs (a package-install sketch follows after this list)
  2. change /etc/hadoop/hdfs-site.xml

    <property>
      <name>hadoop.proxyuser.YOUR_HOSTNAME_NAME.hosts</name>
      <value>*</value>
    </property>
    
    <property>
      <name>nfs.superuser</name>
      <value>spark</value>
    </property>
    
  3. change /etc/hadoop/core-site.xml

    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>
    
  4. stop hadoop

  5. start hadoop (a restart sketch for steps 4 and 5 follows below)
  6. mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync YOUR_HOSTNAME_NAME:/ /data/hdfs/ -v
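
For step 1, a minimal sketch of what "install nfs" can mean here. The package names are assumptions: nfs-common is the usual Ubuntu client package, and the gateway packages below assume CDH-style packaging (which the /etc/hadoop/conf.pseudo path in the question suggests); a plain Apache tarball install ships the gateway inside Hadoop itself.

    # On the client machine that will run the mount: the NFS client utilities.
    sudo apt-get install nfs-common

    # On the HDFS side, if CDH packaging is used: the NFS3 gateway and portmap daemons.
    sudo apt-get install hadoop-hdfs-nfs3 hadoop-hdfs-portmap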
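
For steps 4 and 5, one possible way to bounce HDFS and the NFS gateway on a plain Apache Hadoop 2.x install; $HADOOP_HOME and the daemon-script invocation are assumptions, and packaged installs (CDH etc.) use their own service scripts instead.

    # Restart HDFS so the core-site.xml / hdfs-site.xml changes are reloaded.
    $HADOOP_HOME/sbin/stop-dfs.sh
    $HADOOP_HOME/sbin/start-dfs.sh

    # Restart the NFS gateway and its portmap so they pick up the new
    # proxyuser / nfs.superuser settings.
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs stop nfs3
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs stop portmap
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start portmap
    $HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start nfs3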
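
After step 6, a quick sanity check from the client that the export is visible and actually mounted (YOUR_HOSTNAME_NAME and /data/hdfs/ are the same placeholders as above):

    rpcinfo -p YOUR_HOSTNAME_NAME      # nfs (100003) and mountd (100005) should be listed
    showmount -e YOUR_HOSTNAME_NAME    # export list should show "/ *"
    df -h /data/hdfs/                  # the mount should appear as an NFS filesystem
    ls /data/hdfs/                     # should list the HDFS root
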
Andrew Schulman