
I substituted all the IP addresses with hostnames and renamed the configs under /var/lib/glusterd (IP to hostname) with a shell script. After that I restarted the Gluster daemon and the volume, then checked that all the peers are connected:

root@GlusterNode1a:~# gluster peer status
Number of Peers: 3

Hostname: gluster-1b
Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
State: Peer in Cluster (Connected)

Hostname: gluster-2b
Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
State: Peer in Cluster (Connected)

Hostname: gluster-2a
Uuid: 72405811-15a0-456b-86bb-1589058ff89b
State: Peer in Cluster (Connected)
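For reference, the substitution itself can be sketched roughly like this. The helper name `rename_ips`, the addresses, and the hostname are hypothetical, and `sed -i` assumes GNU sed; always back up /var/lib/glusterd before editing it in place:

```shell
#!/bin/sh
# Sketch: replace each IP with its hostname in every file under a config tree.
# rename_ips and all addresses below are hypothetical examples.
rename_ips() {
    dir=$1; shift
    while [ $# -ge 2 ]; do
        ip=$1; host=$2; shift 2
        # Escape the dots so sed matches them literally.
        esc=$(printf '%s' "$ip" | sed 's/\./\\./g')
        grep -rl "$ip" "$dir" | while read -r f; do
            sed -i "s/$esc/$host/g" "$f"   # GNU sed in-place edit
        done
    done
}

# Demonstration on a scratch copy; point it at /var/lib/glusterd for real use.
tmp=$(mktemp -d)
printf 'option remote-host 10.0.0.12\n' > "$tmp/storage.vol"
rename_ips "$tmp" 10.0.0.12 gluster-1b
cat "$tmp/storage.vol"   # -> option remote-host gluster-1b
rm -rf "$tmp"
```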

I could see the mounted volume's size change on all the nodes when running df, so new data is arriving. But recently I noticed error messages in the application log:

copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
readfile(/storage/1438227/dat): failed to open stream: Input/output error
unlink(/storage/189457/23/dat): No such file or directory

Finally, I found out that some bricks are offline:

root@GlusterNode1a:~# gluster volume status
Status of volume: storage
Gluster process            Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-1a:/storage/1a    24009  Y  1326
Brick gluster-1b:/storage/1b    24009  N  N/A
Brick gluster-2a:/storage/2a    24009  N  N/A
Brick gluster-2b:/storage/2b    24009  N  N/A
Brick gluster-1a:/storage/3a    24011  Y  1332
Brick gluster-1b:/storage/3b    24011  N  N/A
Brick gluster-2a:/storage/4a    24011  N  N/A
Brick gluster-2b:/storage/4b    24011  N  N/A
NFS Server on localhost          38467  Y  24670
Self-heal Daemon on localhost        N/A  Y  24676
NFS Server on gluster-2b      38467  Y  4339
Self-heal Daemon on gluster-2b    N/A  Y  4345
NFS Server on gluster-2a      38467  Y  1392
Self-heal Daemon on gluster-2a    N/A  Y  1402
NFS Server on gluster-1b      38467  Y  2435
Self-heal Daemon on gluster-1b    N/A  Y  2441
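Offline bricks can be spotted without scanning the whole table; a small awk filter over the `gluster volume status` output does it (a sketch assuming the column layout shown above; the `offline_bricks` helper name is made up):

```shell
#!/bin/sh
# Sketch: extract bricks whose Online column is "N" from `gluster volume status`.
# In practice you would pipe the live command:  gluster volume status | offline_bricks
offline_bricks() {
    # Brick lines end with: <port> <Y|N> <pid>, so Online is the next-to-last field.
    awk '/^Brick/ && $(NF-1) == "N" { print $2 }'
}

# Example on the output shown above:
printf '%s\n' \
  'Brick gluster-1a:/storage/1a    24009  Y  1326' \
  'Brick gluster-1b:/storage/1b    24009  N  N/A' \
  | offline_bricks   # -> gluster-1b:/storage/1b
```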

What can I do about that? I need to fix it.

Note: CPU and Network usage of all the four nodes are about the same.

Roman Newaza

1 Answer


I resolved the issue with the help of JoeJulian from Freenode #Gluster. When I examined the process list, there were still processes using the old configuration: /usr/sbin/glusterfsd -s localhost --volfile-id storage.11.111.111.11.storage-2b.... After executing this command:

killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
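Such stale brick processes can be identified from their command line before killing anything; this sketch flags glusterfsd processes whose --volfile-id still contains a dotted-quad IP (the `stale_glusterfsd` helper and the sample process lines are hypothetical):

```shell
#!/bin/sh
# Sketch: find glusterfsd processes whose volfile-id still references an IP.
stale_glusterfsd() {
    # Reads ps-style lines on stdin; keeps glusterfsd lines containing a dotted quad.
    grep 'glusterfsd' | grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
}

# In practice:  ps ax -o pid=,args= | stale_glusterfsd
printf '%s\n' \
  '1326 /usr/sbin/glusterfsd -s localhost --volfile-id storage.gluster-1a.storage-1a' \
  '4417 /usr/sbin/glusterfsd -s localhost --volfile-id storage.10.0.0.12.storage-2b' \
  | stale_glusterfsd
```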

The situation is resolved and all bricks are online:

# gluster volume status
Status of volume: storage
Gluster process                     Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster-1a:/storage/1a        24009   Y   17302
Brick gluster-1b:/storage/1b        24009   Y   12188
Brick gluster-2a:/storage/2a        24009   Y   10863
Brick gluster-2b:/storage/2b        24009   Y   13486
Brick gluster-1a:/storage/3a        24011   Y   17308
Brick gluster-1b:/storage/3b        24011   Y   12194
Brick gluster-2a:/storage/4a        24011   Y   10869
Brick gluster-2b:/storage/4b        24011   Y   13492
NFS Server on localhost                 38467   Y   17314
Self-heal Daemon on localhost               N/A Y   17320
NFS Server on gluster-2a            38467   Y   10879
Self-heal Daemon on gluster-2a      N/A Y   10885
NFS Server on gluster-2b            38467   Y   13503
Self-heal Daemon on gluster-2b      N/A Y   13509
NFS Server on gluster-1b            38467   Y   12200
Self-heal Daemon on gluster-1b      N/A Y   12206