
Goal

I wanted to try GlusterFS geo-replication between a few VirtualBox VMs on my computer, for later use distributed across multiple sites.

Configuration

I installed GlusterFS 3.6 on 32-bit Ubuntu servers like this:

add-apt-repository -y ppa:gluster/glusterfs-3.6
apt-get update -qq
apt-get install glusterfs-server -y
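
To confirm what actually got installed, a quick sanity check like this can be used:

# print the installed GlusterFS version
glusterfs --version
# make sure the daemon is running (service name as packaged on Ubuntu)
service glusterfs-server status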

In /etc/hosts on every VM there are entries like this, so that I can use hostnames:

192.168.1.1 ivymaster.com
192.168.1.2 ivyslave2.com
192.168.1.3 ivyslave1.com
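
Name resolution can be spot-checked on each VM, for example:

# should print the address configured in /etc/hosts
getent hosts ivymaster.com
ping -c 1 ivyslave1.com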

Setup

First I created and started a volume on the master (force is needed to create the brick on the root filesystem):

gluster volume create master ivymaster.com:/var/glustermaster/ force
gluster volume start master
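
Both commands complete without errors. The volume can be double-checked like this (a routine sanity check, nothing geo-replication specific):

gluster volume info master
gluster volume status master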

I set up passwordless root login with ssh-copy-id and logged in manually once to check that everything is configured correctly and that the host is stored in known_hosts.
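
The key setup looked roughly like this (root login as above):

# generate a key pair on the master if none exists yet
ssh-keygen
# copy the public key to the slave
ssh-copy-id root@ivyslave2.com
# log in once manually so the slave ends up in known_hosts
ssh root@ivyslave2.com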

I was not able to set up synchronisation into a plain directory as described in Geo-Replication Terminology - Understanding the URI. Creating the geo-replication session failed with a URI problem.

gluster volume geo-replication master ivyslave2.com:/var/slave2 start
 Staging failed on localhost. Please check the log file for more details.
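
As far as I understand the URI scheme, the two slave forms differ only in the number of colons:

# slave is a plain directory on the remote host (single colon)
<slave-host>:<absolute-path>
# slave is a gluster volume on the remote host (double colon)
<slave-host>::<volume-name>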

Errors after successfully creating the replication

The log file contains entries like "Invalid slave name", "Unable to store slave volume name", and "Unable to fetch slave or confpath details".

When I instead create a volume on ivyslave2.com and set up geo-replication against that volume, the create step works:

gluster volume geo-replication master ivyslave2.com::slave2 create push-pem force
Creating geo-replication session between master & ivyslave2.com::slave2 has been successful
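
For completeness, the slave volume was created analogously to the master one, roughly like this (the brick path /var/glusterslave2 is a placeholder, not necessarily the real one):

# brick path is a placeholder
gluster volume create slave2 ivyslave2.com:/var/glusterslave2 force
gluster volume start slave2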

Unfortunately, gluster volume geo-replication master ivyslave2.com::slave2 status says the replication is faulty.

MASTER NODE      MASTER VOL    MASTER BRICK                SLAVE                    STATUS    CHECKPOINT STATUS    CRAWL STATUS
--------------------------------------------------------------------------------------------------------------------------------
ivyVirtMaster    master        /var/glusterfs_master_nv    ivyslave2.com::slave2    faulty    N/A                  N/A

After executing this command, the log file on the master contains entries like:

Using passed config template(/var/lib/glusterd/geo-replication/master_ivyslave2.com_slave2/gsyncd.conf).
Unable to read gsyncd status file
Unable to read the statusfile for /var/glusterfs_master_nv brick for master(master), ivyslave2.com::slave2(slave) session
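
These are the geo-replication logs in their default locations (paths as far as I know):

# on the master, one log directory per session
ls /var/log/glusterfs/geo-replication/
# on the slave
ls /var/log/glusterfs/geo-replication-slaves/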

Issue with tune2fs?

The log file on the slave contains:

Received status volume req for volume slave2
tune2fs exited with non-zero exit status
failed to get inode size
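
My understanding is that glusterd reads the brick's inode size by running tune2fs -l against the block device backing the brick (for ext filesystems), so the failing call can probably be reproduced by hand:

# find the device backing the slave brick (brick path is the placeholder from above)
df /var/glusterslave2
# query the inode size the way glusterd does; /dev/sda1 is a placeholder
tune2fs -l /dev/sda1 | grep -i 'inode size'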

Is the volume on the slave faulty? Is this related to tune2fs exiting with a non-zero exit status? How can geo-replication be set up without a slave volume? Is there something wrong in my geo-replication configuration?

Relation

This is a duplicate of a post on Stack Overflow.
