
I'm setting up some machines with Ansible and need to enable passwordless connections between them. I've got a database master and several slaves. For initial replication the slaves need to SSH into the master and get a copy of the database. I'm not sure what the best way is to dynamically add all the slaves' public keys to the master's authorized_keys file.

I already thought about providing the slaves' public keys as variables and then adding them via the authorized_key module. But then I would have to maintain the list of keys. I'm looking for an approach where I just add another host to the slaves group and the rest works automatically.
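To make it concrete, the variant I'd like to avoid would look roughly like this (slave_public_keys being a hand-maintained list of key strings):

# the approach I'd rather avoid: a hand-maintained list of keys
- name: add slave public keys from a static list
  sudo: yes
  authorized_key: user=postgres state=present key="{{ item }}"
  with_items: slave_public_keys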

Any ideas?

Update:

So far I've got the following pseudo code:

# collect public keys from slave machines
- name: collect slave keys
  {% for host in groups['databases_slave'] %}
     shell: /bin/cat /var/lib/postgresql/.ssh/id_rsa.pub
     register: slave_keys #how to add to an array here?
  {% endfor %}

# Tasks for PostgreSQL master
- name: add slave public key
  sudo: yes
  authorized_key: user=postgres state=present key={{ item }}
  with_items: slave_keys

The {% %} loop only works in template files, not directly in playbooks. Is there any way to do this in my playbook?

soupdiver

3 Answers


I've come up with a solution that works for me. I create the public/private key pairs on the machine Ansible is run from, and on the first connection I put the keys in place.
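Putting the keys in place on each slave boils down to something like this (the local keys/ directory layout and the postgres paths are just how I happen to have it set up, so adjust as needed):

# Tasks for each PostgreSQL slave: install the locally pre-generated key pair
# (assumes the .ssh directory already exists)
- name: copy private key
  sudo: yes
  copy: src=../../../keys/{{ inventory_hostname }}/id_rsa dest=/var/lib/postgresql/.ssh/id_rsa owner=postgres group=postgres mode=0600

- name: copy public key
  sudo: yes
  copy: src=../../../keys/{{ inventory_hostname }}/id_rsa.pub dest=/var/lib/postgresql/.ssh/id_rsa.pub owner=postgres group=postgres mode=0644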

Then I add the keys from all the slaves to the master with the following:

# Tasks for PostgreSQL master
- name: add slave public key
  sudo: yes
  authorized_key: user=postgres state=present key="{{ lookup('file', '../../../keys/' + item + '/id_rsa.pub') }}"
  with_items: groups.databases_slave

The whole playbook can be found on github.com/soupdiver/ansible-cluster.

soupdiver

I believe the following solution should work in your case. I've been using it for a similar scenario with a central backup server and multiple backup clients.

I have a role (let's say "db_replication_master") associated with the server receiving the connections:

    - role: db_replication_master
      db_slaves: ['someserver', 'someotherserver']
      db_slave_user: 'someuser' # in case you have different users
      db_master_user: 'someotheruser'
      extra_pubkeys: ['files/id_rsa.pub'] # other keys that need access to master

Then we create the actual tasks in the db_replication_master role:

    - name: create remote accounts ssh keys
      user:
        name: "{{ db_slave_user }}"
        generate_ssh_key: yes
      delegate_to: "{{ item }}"
      with_items: db_slaves

    - name: fetch pubkeys from remote users
      fetch:
        dest: "tmp/db_replication_role/{{ item }}.pub"
        src: "~{{db_slave_user}}/.ssh/id_rsa.pub"
        flat: yes
      delegate_to: "{{ item }}"
      with_items: db_slaves
      register: remote_pubkeys
      changed_when: false # we remove them in "remove temp local pubkey copies" below

    - name: add pubkeys to master server
      authorized_key:
        user: "{{ db_master_user }}"
        key: "{{ lookup('file', item) }}"
      with_flattened:
        - extra_pubkeys
        - "{{ remote_pubkeys.results | default({}) | map(attribute='dest') | list }}"

    - name: remove temp local pubkey copies
      local_action: file dest="tmp/db_replication_role" state=absent
      changed_when: false

So we're basically:

  • dynamically creating ssh-keys on those slaves that still don't have them
  • then we're using delegate_to to run the fetch module on the slaves and fetch their ssh pubkeys to the host running ansible, also saving the result of this operation in a variable so we can access the actual list of fetched files
  • after that we proceed to normally push the fetched ssh pubkeys (plus any extra pubkeys provided) to the master node with the authorized_key module (we use a couple of jinja2 filters to dig out the file paths from the variable registered in the task above)
  • finally we remove the pubkey files locally cached at the host running ansible

The limitation of having the same user on all hosts can probably be worked around, but from what I gather from your question, that's probably not an issue for you (it's slightly more relevant for my backup scenario). You could of course also make the key type (rsa, dsa, ecdsa, etc.) configurable.
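For instance, the key type could be exposed as a role variable, roughly like this (db_slave_key_type is just an illustrative name; the user module's ssh_key_type parameter does the actual work):

    - name: create remote accounts ssh keys
      user:
        name: "{{ db_slave_user }}"
        generate_ssh_key: yes
        ssh_key_type: "{{ db_slave_key_type | default('rsa') }}"
      delegate_to: "{{ item }}"
      with_items: db_slaves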

Update: oops, I'd originally written using terminology specific to my problem, not yours! Should make more sense now.

Leo Antunes

I had the same issue, and I solved it this way:

---
# Gather the SSH keys of all hosts and add them to every host in the inventory
# to allow passwordless SSH between them
- hosts: all
  tasks:
  - name: Generate SSH keys
    shell: ssh-keygen -q -t rsa -f /root/.ssh/id_rsa -N ''
    args:
      creates: /root/.ssh/id_rsa

  # Registering the output makes each host's public key available to every
  # other host via hostvars in the next task
  - name: Fetch the SSH public key
    shell: /bin/cat /root/.ssh/id_rsa.pub
    register: ssh_keys

  - name: Allow passwordless SSH between all hosts
    lineinfile:
      dest: /root/.ssh/authorized_keys
      state: present
      line: "{{ hostvars[item]['ssh_keys']['stdout'] }}"
    with_items: "{{ groups['all']}}"

Julen Larrucea