
I'm really flailing around in AWS trying to figure out what I'm missing here. I'd like to make it so that an IAM user can download files from an S3 bucket - without just making the files totally public - but I'm getting access denied. If anyone can spot what's off I'll be stoked.

What I've done so far:

  • Created a user called my-user (for sake of example)
  • Generated access keys for the user and put them in ~/.aws on an EC2 instance
  • Created a bucket policy that I'd hoped grants access for my-user
  • Ran the command aws s3 cp --profile my-user s3://my-bucket/thing.zip .
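
For reference, a minimal sketch of what the `~/.aws` setup might look like (the profile name and keys here are placeholders):

```ini
# ~/.aws/credentials
[my-user]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey
```

You can confirm which principal the CLI is actually using with `aws sts get-caller-identity --profile my-user`; if that returns the root account ARN instead of the IAM user's ARN, the profile isn't being picked up.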

Bucket policy:

{
  "Id": "Policy1384791162970",
  "Statement": [
    {
      "Sid": "Stmt1384791151633",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/my-user"
      }
    }
  ]
}

The result is `A client error (AccessDenied) occurred: Access Denied`, although I can download using the same command and the default (root account?) access keys.

I've tried adding a user policy as well. While I don't know why it would be necessary I thought it wouldn't hurt, so I attached this to my-user.

{
  "Statement": [
    {
      "Sid": "Stmt1384889624746",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

Same results.

– Eric Hammond
– Josh Gagnon
8 Answers

42

I was struggling with this, too, but I found an answer over here https://stackoverflow.com/a/17162973/1750869 that helped resolve this issue for me. Reposting answer below.


You don't have to open permissions to everyone. Use the bucket policies below on the source and destination buckets to copy from a bucket in one account to another using an IAM user.

Bucket to copy from – SourceBucket

Bucket to copy to – DestinationBucket

Source AWS account ID – XXXX-XXXX-XXXX

Source IAM user – src-iam-user

The policies below mean that the IAM user XXXX-XXXX-XXXX:src-iam-user has s3:ListBucket on SourceBucket and s3:GetObject on SourceBucket/*, plus s3:ListBucket on DestinationBucket and s3:PutObject on DestinationBucket/*.

On the SourceBucket the policy should be like:

{
  "Id": "Policy1357935677554",
  "Statement": [
    {
      "Sid": "Stmt1357935647218",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::SourceBucket",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    },
    {
      "Sid": "Stmt1357935676138",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::SourceBucket/*",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    }
  ]
}

On the DestinationBucket the policy should be:

{
  "Id": "Policy1357935677554",
  "Statement": [
    {
      "Sid": "Stmt1357935647218",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DestinationBucket",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    },
    {
      "Sid": "Stmt1357935676138",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DestinationBucket/*",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    }
  ]
}
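
The key detail in these policies is that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions (`s3:GetObject`, `s3:PutObject`) apply to `bucket/*`. A small sketch that generates such a policy (the bucket name, account ID, and user name are placeholders):

```python
import json

def bucket_policy(bucket, account_id, user, object_actions):
    """Build a bucket policy granting an IAM user s3:ListBucket on the
    bucket itself plus the given actions on the objects inside it."""
    principal = {"AWS": f"arn:aws:iam::{account_id}:user/{user}"}
    return {
        "Statement": [
            {   # ListBucket must target the bucket ARN (no /*)
                "Action": ["s3:ListBucket"],
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Principal": principal,
            },
            {   # Object-level actions target the objects, i.e. bucket/*
                "Action": object_actions,
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Principal": principal,
            },
        ]
    }

print(json.dumps(bucket_policy("SourceBucket", "111122223333",
                               "src-iam-user", ["s3:GetObject"]), indent=2))
```

Attaching `s3:GetObject` on `bucket/*` alone, as in the question, is exactly the case that fails: the CLI also needs `s3:ListBucket` on the bucket ARN.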

The command to run is: s3cmd cp s3://SourceBucket/File1 s3://DestinationBucket/File1

– Sergio

  • Oh my god, you're my hero. I was just missing the ListBucket permission at the bucket level. I still don't know why I need to be able to ls the bucket in order to cp an object from it, but that's okay. Maybe it's only a quirk of using the aws command? – Josh Gagnon Nov 20 '13 at 00:03
  • Yeah, it's pretty strange. You would think having a single policy of s3:* (however insecure that may be) would be enough for sanity testing. – Sergio Nov 20 '13 at 16:44
  • fml, 2 days wasted on that ListBucket permission. Good catch. – chaqke Jul 09 '15 at 01:50
  • Spent a lot of time on this. This was the answer I needed: ListBucket on bucketname, GetObject on bucketname/*. – rsmoorthy Jul 24 '16 at 02:53
  • I was quite new to AWS and am using Windows, so it took me a while to get the values right and s3cmd working on my system. For those with the same issues: for `src-iam-user`, go to AWS > IAM > User > User ARN; for `DestinationBucket` and `SourceBucket`, go to AWS > S3 and click the bucket in the list to get the desired value. For s3cmd setup, follow this: https://tecadmin.net/setup-s3cmd-in-windows/ – Bhanu Jan 05 '20 at 17:32
16

When I faced the same issue, it turned out that AWS required server-side encryption to be enabled, so the following command worked successfully for me:

aws s3 cp test.txt s3://my-s3-bucket --sse AES256
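
A likely cause of this behavior is a bucket policy that denies uploads lacking the encryption header, along these lines (this is a common pattern, not necessarily the exact policy on the bucket in question):

```json
{
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-s3-bucket/*",
      "Condition": {
        "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
      }
    }
  ]
}
```

With a statement like this in place, any PutObject that doesn't send `--sse AES256` is rejected as AccessDenied.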
– zjor

  • Thanks! In my case it was `--sse aws:kms` to use the bucket default. – Michael Yoo Jul 02 '18 at 02:08
  • If you are using a non-default KMS key, you need to pass that as well: `--sse-kms-key-id 0123-abc-etc`. However, the part that isn't clear is that to use your own KMS key you must have the IAM permission `kms:GenerateDataKey`, or you will still get access denied. – digarok Mar 28 '19 at 13:48
  • The question is about downloading; you are uploading to an encrypted S3 bucket, hence the requirement for the key. – Ilhicas Aug 23 '19 at 12:02
  • Why does AWS do this? Enabling encryption on a bucket should be an internal thing. Needing to pass --sse aws:kms is silly to begin with, but giving a bare CopyObject operation: Access Denied error because you didn't pass this param is just ridiculous. – bjm88 Sep 25 '20 at 03:25
6

Even if your IAM policies are set up correctly, you can still get an error like An error occurred (AccessDenied) when calling the <OPERATION-NAME> operation: Access Denied due to MFA (Multi-Factor Authentication) requirements on your credentials. These can catch you off guard because if you've already logged into the AWS console, it will appear that your credentials are working fine, and the permission-denied error message from the aws cli is not particularly helpful.

There are already some good instructions on how to set up MFA with the aws cli.

Basically, you need to get the ARN (serial number) of your MFA device and send it, along with the current code from your device, to get a temporary token.
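
As a sketch of that flow (the device ARN and token code here are placeholders): run `aws sts get-session-token --serial-number arn:aws:iam::111122223333:mfa/my-user --token-code 123456`, then turn the JSON it prints into the environment variables the CLI needs, along these lines:

```python
import json

# Example of the JSON shape `aws sts get-session-token` prints
# (the credential values here are fake placeholders).
sts_response = json.loads("""
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "exampleSecret",
    "SessionToken": "exampleToken",
    "Expiration": "2024-01-01T00:00:00Z"
  }
}
""")

def export_lines(creds):
    """Turn an STS Credentials dict into shell `export` lines."""
    return [
        f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={creds['SessionToken']}",
    ]

print("\n".join(export_lines(sts_response["Credentials"])))
```

Evaluate the printed lines in your shell and subsequent aws cli calls will use the temporary, MFA-backed credentials until they expire.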

– Mark Chackerian
5

I wouldn't recommend the 'Any authenticated AWS user' option mentioned by James.

Doing so adds a bucket-level ACL that allows any AWS account (not just your IAM users) to list/delete/modify-acls for that bucket.

In other words, public read/write for anyone with an AWS account.
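
For reference, this is the predefined-group grantee that "Any Authenticated AWS User" maps to in an S3 ACL grant (the URI is the real AWS group identifier; the surrounding shape follows the `s3api` ACL format):

```json
{
  "Grantee": {
    "Type": "Group",
    "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
  },
  "Permission": "READ"
}
```

Note that the group covers every authenticated AWS account worldwide, not just identities in your own account.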

– Andrew

  • Have you tested this? I was under the impression that "AWS account" actually means any entity within my organisation – i.e. a user, an EC2 instance, an IAM role – but not someone from a different account. I could be wrong, and I'll edit my contribution and quickly audit my buckets if that's the case. Thanks. – James Dunmore Jul 20 '16 at 10:15
  • Yup. The "Authenticated User" grantee in S3 ACLs means all AWS accounts. It enforces signed requests, but nothing more. Here's a reference: [link](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee-predefined-groups) – Andrew Jul 21 '16 at 11:53
3

I managed to fix this without having to write policies: from the S3 console (web UI) I selected the bucket and, in the Permissions tab, chose "Any Authenticated AWS User" and ticked all the boxes.

UPDATE: as pointed out in the comments, "Any Authenticated AWS User" isn't just users in your account; it's all authenticated AWS users. Please use with caution.

  • I imagine that's creating a policy for you. Ticking all the boxes is going to get you ListBucket, etc. and more. – Josh Gagnon Mar 01 '16 at 16:28
  • I'm sure it is - I just know that writing policies can be a pain, those tick boxes may give you a bit more but a nice quick fix – James Dunmore Mar 02 '16 at 17:30
0

I just went to the web UI, clicked on the bucket, went to Permissions and then to Policy. When it opened, I just clicked Delete. I did the same for what I think was the configuration as well.

I went back to the main s3 page, then clicked on the bucket and attempted to delete it and it worked.

It also worked when I did it from the aws-cli using

$ aws s3 rb s3://bucket-name --force  

Anyway, that is what worked for me. The policy under Permissions was stopping me from deleting the bucket.

0

I once got this error simply by trying to run:

aws s3 cp s3://[bucketName]/[fileName] .

in a folder where I didn't have permissions. It's silly, but make sure you have write permission on the local folder you're in before moving on!
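
A quick way to rule this out before blaming IAM (a minimal sketch; `"."` stands in for whatever directory you're downloading into):

```python
import os

target = "."  # directory you're downloading into
if os.access(target, os.W_OK):
    print(f"{target} is writable; the AccessDenied is coming from AWS")
else:
    print(f"no write permission on {target}; fix local permissions first")
```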

0

The issue can also arise when you use an invalid resource or object name. I had the same issue with boto3 (in my case it was an invalid bucket name).
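
A quick client-side check for that case – a sketch of S3's general bucket-naming rules (3–63 characters; lowercase letters, digits, hyphens, and dots; must start and end with a letter or digit):

```python
import re

# General S3 bucket-naming rules (not exhaustive; e.g. IP-address-like
# names are also disallowed, which this sketch doesn't check).
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name):
    return bool(BUCKET_NAME_RE.match(name))

print(looks_like_valid_bucket_name("my-bucket"))   # True
print(looks_like_valid_bucket_name("My_Bucket"))   # False: uppercase and underscore
```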

– yunus