ansible to use all directories in S3


I have an Ansible playbook for backing up and restoring Cassandra. As we know, Cassandra is a distributed DB; in my case I have 3 VMs, each with the same keyspaces, but the data in those keyspaces may differ between node 1 and node 3, so I use the aws_s3 module:

- name: "cassandra maintenance | cassandra backup | upload backup to s3"
  aws_s3:
    bucket: "{{ bucket }}"
    object: /backup/{{ timestamp }}/{{ ansible_hostname }}/{{ outer_keyspaces }}.tgz
    src: /tmp/{{ outer_keyspaces }}.tgz
    mode: put
    aws_access_key: KMFKLFMDLFGFMGKFMN
    aws_secret_key: EGMFJKEGNERGNERGNERUGBREGB
    retries: 2
  register: s3put
  ignore_errors: yes
  serial: 1

As you can see, this play uploads the backup to S3 under the following path: s3://backup/2019-05-28/VIRTUAL_MACHINE_HOSTNAME/blah-blah-blah
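Because each node uploads under its own hostname prefix, the objects for one backup run can be enumerated with the module's list mode. A minimal sketch, assuming the same bucket/timestamp variables as above (the `prefix` parameter limits a `mode: list` call to keys beginning with that prefix):

```yaml
# Sketch: list everything this host uploaded for a given timestamp.
- name: "cassandra maintenance | cassandra restore | list backups in s3"
  aws_s3:
    bucket: "{{ bucket }}"
    mode: list
    prefix: "backup/{{ timestamp }}/{{ ansible_hostname }}/"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: s3list

# s3list.s3_keys then holds the matching object keys, one per keyspace archive.
```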

And here is the restore playbook:

- name: "cassandra maintenance | cassandra restore | download backup from s3"
  aws_s3:
    bucket: "{{ bucket }}"
    mode: get
    object: /backup/{{ timestamp }}/{{ ansible_hostname }}/{{ outer_keyspaces }}.tgz
    dest: /tmp/{{ outer_keyspaces }}.tgz
    aws_access_key: femrfjmnrgnerjgnrej
    aws_secret_key: mkglermngjregnejrgnjegnejgn
    overwrite: different
  register: s3get

So the logic is the same: for the restore I use variables and ansible_hostname in the object field, and this works perfectly when you back up and restore on the same nodes.

But how can I restore this backup on, say, other nodes with different hostnames?
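One way to decouple the download path from the current machine's hostname is to look it up through a variable rather than hard-coding ansible_hostname. A sketch, where `source_host` is a hypothetical variable (not part of the original playbook) that names the hostname directory to pull from and falls back to the node's own hostname:

```yaml
# Sketch: restore from another node's backup directory.
# `source_host` is a hypothetical variable; if unset, the task behaves
# exactly like the original and uses this node's own hostname.
- name: "cassandra maintenance | cassandra restore | download backup from s3"
  aws_s3:
    bucket: "{{ bucket }}"
    mode: get
    object: /backup/{{ timestamp }}/{{ source_host | default(ansible_hostname) }}/{{ outer_keyspaces }}.tgz
    dest: /tmp/{{ outer_keyspaces }}.tgz
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    overwrite: different
  register: s3get
```

With this shape, `source_host` could be supplied per host in inventory (mapping each new node to an old node's directory) or once for all hosts on the command line with `-e source_host=old-node-1`.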

Joom187

Posted 2019-05-28T11:26:12.410


No answers