I am running Ansible on an EC2 instance with an assigned IAM role, and I am running this playbook:
$ cat s3.yaml
---
- hosts: localhost
  remote_user: ec2-user
  tasks:
    - name: download ec2.py from s3
      s3:
        bucket: mybucket
        object: /ec2.py
        dest: /tmp/ec2.py
        mode: get
Running it with -vvv provides this error message:
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "aws_access_key": null,
            "aws_secret_key": null,
            "bucket": "mybucket",
            "dest": "/tmp/ec2.py",
            "ec2_url": null,
            "encrypt": true,
            "expiry": "600",
            "headers": null,
            "marker": null,
            "max_keys": "1000",
            "metadata": null,
            "mode": "get",
            "object": "/ec2.py",
            "overwrite": "always",
            "permission": [
                "private"
            ],
            "prefix": null,
            "profile": null,
            "region": null,
            "retries": 0,
            "rgw": false,
            "s3_url": null,
            "security_token": null,
            "src": null,
            "validate_certs": true,
            "version": null
        },
        "module_name": "s3"
    },
    "msg": "Source bucket cannot be found"
}
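One thing I notice in that output is that region resolves to null. I have not confirmed whether it is related to the failure, but for reference a variant of the task with the region set explicitly would look roughly like this (the region value is only a placeholder for wherever mybucket actually lives):

    - name: download ec2.py from s3
      s3:
        bucket: mybucket
        object: /ec2.py
        dest: /tmp/ec2.py
        mode: get
        region: us-east-1   # placeholder; the bucket's actual region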
According to the documentation, Ansible (via boto) should be able to pick up the EC2 instance role's credentials automatically.
So far I have:
- Followed the documentation, which implies that it should just work.
- Verified that my boto version is new enough (boto 2.42).
- Verified that the EC2 instance role has the correct permissions (aws s3 cp s3://mybucket/ec2.py /tmp/ec2.py works just fine).
- Verified that the EC2 instance credentials are available through curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access.
Both this answered question and the documentation indicate that it should be possible.
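To check that claim independently of Ansible, one option would be a small boto 2.x script along these lines (a sketch only; the bucket and key names are assumed to match the playbook above):

import boto

# No credentials are passed in, so boto should fall back to the
# EC2 instance metadata service (the instance-profile credentials).
conn = boto.connect_s3()

# Raises S3ResponseError if the bucket cannot be reached with those credentials.
bucket = conn.get_bucket('mybucket')

# Prints the key object if it exists, None otherwise.
print(bucket.get_key('ec2.py'))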
Can I accomplish this without explicitly supplying the instance credentials to boto/Ansible? If yes, how? The documentation seems a bit lacking on this point.