I am using GlusterFS to create and mount volumes across four machines. Say, for example, the machines are called machine1, machine2, machine3, and machine4.

My peers have already been successfully probed.
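
For context, the probing was done in an earlier task along these lines (a sketch; the glusterssl group name matches the variable used further down, but the exact task is an assumption on my part):

- name: Probe Gluster peers.
  command: "gluster peer probe {{ item }}"
  with_items: "{{ groups.glusterssl }}"
  when: item != inventory_hostname
  run_once: true
  become: true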

I have used the following command to create my volume:

sudo gluster volume create ssl replica 2 transport tcp machine1:/srv/gluster/ssl machine2:/srv/gluster/ssl machine3:/srv/gluster/ssl machine4:/srv/gluster/ssl force

I then start the volume with:

sudo gluster volume start ssl

I have mounted the directory /myproject/ssl using the following command:

sudo mount -t glusterfs machine1:/ssl /myproject/ssl
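
For reference, the equivalent /etc/fstab entry (which is what a mount task with state: mounted would end up managing) looks like this, using the mount point from above:

machine1:/ssl /myproject/ssl glusterfs defaults,_netdev 0 0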

When mounted on each machine, everything works as expected and the /myproject/ssl directory has data shared across all machines.

The question is, how on Earth do I do this the Ansible way?

Here are my attempts at doing those two commands the Ansible way:

- name: Configure Gluster volume.
  gluster_volume:
    state: present
    name: "{{ gluster.brick_name }}"
    brick: "{{ gluster.brick_dir }}"
    replicas: 2
    cluster: "{{ groups.glusterssl | join(',') }}"
    host: "{{ inventory_hostname }}"
    force: yes
  become: true
  become_user: root
  become_method: sudo
  run_once: true
  ignore_errors: true

- name: Ensure Gluster volume is mounted.
  mount:
    name: "{{ gluster.brick_name }}"
    src: "{{ inventory_hostname }}:/{{ gluster.brick_name }}"
    fstype: glusterfs
    opts: "defaults,_netdev"
    state: mounted
  become: true
  become_user: root
  become_method: sudo

Despite the peer probe having already returned successfully in a previous task, the Configure Gluster volume task fails with:

fatal: [machine3]: FAILED! => 
  {"changed": false, 
   "failed": true, 
   "invocation": {
     "module_args": {
       "brick": "/srv/gluster/ssl",
       "bricks": "/srv/gluster/ssl", 
       "cluster": ["machine1", "machine2", "machine3", "machine4"],
       "directory": null, 
       "force": true, 
       "host": "machine3", 
       "name": "ssl", 
       "options": {}, 
       "quota": null, 
       "rebalance": false, 
       "replicas": 2, 
       "start_on_create": true, 
       "state": "present", 
       "stripes": null, 
       "transport": "tcp"}, 
     "module_name": "gluster_volume"}, 
   "msg": "failed to probe peer machine1 on machine3"}

If I replace this Ansible task with the first shell command I suggested, everything works fine, but then the Ensure Gluster volume is mounted task fails with:

fatal: [machine3]: FAILED! =>
  {"changed": false,
   "failed": true,
   "invocation": {
     "module_args": {
       "dump": null,
       "fstab": "/etc/fstab",
       "fstype": "glusterfs",
       "name": "ssl",
       "opts": "defaults,_netdev",
       "passno": null,
       "src": "machine3:/ssl",
       "state": "mounted"},
     "module_name": "mount"},
   "msg": "Error mounting ssl: Mount failed. Please check the log file for more details.\n"}

The relevant log output is:

[2016-10-17 09:10:25.602431] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 2-ssl-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-10-17 09:10:25.602480] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 2-ssl-client-3: disconnected from ssl-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2016-10-17 09:10:25.602500] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 2-ssl-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
[2016-10-17 09:10:25.616402] I [fuse-bridge.c:5137:fuse_graph_setup] 0-fuse: switched to graph 2

So, the volume doesn't get started by the Ansible task.

My question is, essentially: how do I create, start, and mount a volume the Ansible way, as I did with the three shell commands above?

Not sure if you ever figured this out, but I wanted to share an Ansible role of mine for GlusterFS that might get you headed in the right direction: https://github.com/mrlesmithjr/ansible-glusterfs – mrlesmithjr Feb 17 '18 at 13:28

1 Answer

You should start the volume with state: started:

- name: Configure Gluster volume.
  gluster_volume:
    state: started
    name: "{{ gluster.brick_name }}"
    brick: "{{ gluster.brick_dir }}"
    replicas: 2
    cluster: "{{ groups.glusterssl | join(',') }}"
    host: "{{ inventory_hostname }}"
    force: yes
  become: true
  become_user: root
  become_method: sudo
  run_once: true
  ignore_errors: true
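
With state: started, the gluster_volume module creates the volume if it doesn't already exist and then starts it, so a separate state: present step isn't needed. One more thing worth checking on the mount side: the mount module's name parameter is the mount point path, so it should be /myproject/ssl rather than the bare volume name. A sketch of the full pair of tasks (variable names carried over from your question; I'd also drop ignore_errors: true once this works, since it hides real failures):

- name: Create and start Gluster volume.
  gluster_volume:
    state: started
    name: "{{ gluster.brick_name }}"
    bricks: "{{ gluster.brick_dir }}"
    replicas: 2
    cluster: "{{ groups.glusterssl | join(',') }}"
    host: "{{ inventory_hostname }}"
    force: yes
  become: true
  run_once: true

- name: Ensure Gluster volume is mounted.
  mount:
    name: /myproject/ssl
    src: "{{ inventory_hostname }}:/{{ gluster.brick_name }}"
    fstype: glusterfs
    opts: "defaults,_netdev"
    state: mounted
  become: true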