
I have a GlusterFS volume hosted on a remote fileserver. I can mount the volume from my webservers in the same DC, as well as from servers in other DCs. However, when I try to mount the volume on my local dev server, the mount fails with the following log entries:

[2015-02-04 15:02:56.034956] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.2 (args: /usr/sbin/glusterfs --volfile-server=eros --volfile-id=/storage /var/storage)
[2015-02-04 15:02:56.065574] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2015-02-04 15:02:56.065650] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/storage)
[2015-02-04 15:02:56.065889] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2015-02-04 15:02:56.065905] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting '/var/storage'.
[2015-02-04 15:02:56.081713] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down

I've verified that the firewall is not blocking the packets, all machines are running the same version of GlusterFS from the same repo, and I can telnet to the Gluster ports from the local server, but I'm still unable to mount the volume on any machine within my local network.
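For reference, here is a bash-only version of the port check I did (no telnet needed; 24007 is glusterd's management port, and bricks listen on 49152 and up in GlusterFS 3.4+ — adjust hosts and ports to your setup):

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's /dev/tcp redirection (no telnet/nc required).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed/filtered"
  fi
}

# "eros" is the volfile server from the log above; substitute your own.
check_port eros 24007    # glusterd management port
check_port eros 49152    # first brick port (3.4+)
```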

Any suggestions would be greatly appreciated.

silenceandshadow

5 Answers


You must provide the VOLUME NAME in the mount command, not the PATH.

Matheus
  • ... so when `gluster volume info` has lines `Volume Name: gv0` and `Brick1: 192.168.225.5:/export/gvA` then it is `mount -t glusterfs 192.168.225.5:gv0 /mnt/foo` and *not* `mount -t glusterfs 192.168.225.5:/export/gvA /mnt/foo` – Peter V. Mørch Sep 09 '20 at 21:02

If you are not using RPM packages it is possible you are experiencing this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1191176

The names of the volfiles on disk were changed for improved RDMA support. This change was introduced in 3.6.2.

Stop glusterd, run `glusterd --xlator-option *.upgrade=on -N` to regenerate the volfiles, then start glusterd again (on all nodes).

S19N

Upgrade your volume

Tested on Proxmox (Debian-based) with GlusterFS 3.8.8; applies since 3.6.2.

Stop the service

service glusterfs-server stop

Upgrade the volume

glusterd -N "--xlator-option=*.upgrade=on"

The xlator-option expects an `=`, and the full option has to be quoted so the shell doesn't glob the `*`. It sets a translator option affecting the volume files.

-N means the process will run in the foreground. Handy if you want to see any errors.
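The globbing risk is easy to demonstrate with plain echo when the option is passed as a separate argument, as in the previous answer (hypothetical file name; nothing Gluster-specific here):

```shell
cd "$(mktemp -d)"
touch foo.upgrade=on                   # a file that happens to match the pattern
echo --xlator-option *.upgrade=on      # unquoted: the shell expands the glob
# → --xlator-option foo.upgrade=on
echo --xlator-option "*.upgrade=on"    # quoted: passed through literally
# → --xlator-option *.upgrade=on
```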

Restart service

service glusterfs-server start

Mount the drive

mount -t glusterfs [machine]:[volume-name] [mount-point]
Fab

I ran into this issue today; I have SSL enabled for clients and servers. In my case I had not set the secure-access option on the client (the option that makes it use the glusterfs.ca file in /etc/ssl/).

To solve it, do this:

touch /var/lib/glusterd/secure-access
anteatersa

I had the same issue, and this command helped me:

gluster volume sync <HOSTNAME> [all|<VOLNAME>]

This syncs the volume information from a peer.

ROMOPAT