3

The way we create buckets in our org and ensure sane ACLs around them is by providing an automated tool (that internally uses Terraform) to provision an S3 bucket. So when a user requests a new bucket named testBucket, we create a bucket named testBucket and also create an IAM user named testBucket-user. The automation ensures that the only actions allowed to this user are:

"s3:ListBucket",
"s3:PutObject",
"s3:GetObject"

and the only resource on which the above actions are allowed is the testBucket bucket.
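Concretely, the generated IAM user policy would look roughly like the sketch below. (This is an illustration, not the actual policy the tool emits; the account details are placeholders. Note that s3:ListBucket applies to the bucket ARN, while object-level actions apply to objects under it.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::testBucket"
    },
    {
      "Sid": "AllowObjectReadWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::testBucket/*"
    }
  ]
}
```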

Similarly, the automation attaches a bucket policy ensuring that the only actions allowed on the bucket are the above three, and only for the user testBucket-user.
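The matching bucket policy can be sketched as follows (the account ID is a placeholder, and a real setup might instead use an explicit Deny with NotPrincipal to lock out everyone else):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/testBucket-user"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::testBucket"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/testBucket-user"
      },
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::testBucket/*"
    }
  ]
}
```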

However, on demand (and where there is a business justification) we do make changes to the generated bucket policies as needed. Recently there was one such requirement, where a certain bucket needed a folder meant to hold all publicly intended images.

Now there were 2 options we had in order to provision the above requirement:

  1. Modify the bucket policy to allow Principal: * for the folder in the bucket, thus making all objects in that folder public by default.
  2. Grant the s3:PutObjectAcl permission to the IAM user that has access to the bucket, and let the developers manage which objects in the folder can or cannot be public.
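Option 2 would amount to adding one more statement to the generated IAM user policy, along these lines (a sketch; the folder name public/ is an assumption):

```json
{
  "Sid": "AllowAclManagementOnPublicFolder",
  "Effect": "Allow",
  "Action": "s3:PutObjectAcl",
  "Resource": "arn:aws:s3:::testBucket/public/*"
}
```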

As the security team we leaned towards the first option, simply because it seemed more logical. The problem with the first option, however, is that any object in this folder (publicly intended or otherwise) would be public by default.

I wonder what the community thinks about this. AWS/IAM experts, which of the two options above would you choose, and why?

qre0ct

3 Answers

2

Restrict access to your S3 buckets or objects by:

  • Writing AWS Identity and Access Management (IAM) user policies that specify the users that can access specific buckets and objects. IAM policies provide a programmatic way to manage Amazon S3 permissions for multiple users. For more information about creating and testing user policies, see the AWS Policy Generator and IAM Policy Simulator.
  • Writing bucket policies that define access to specific buckets and objects. You can use a bucket policy to grant access across AWS accounts, grant public or anonymous permissions, and allow or block access based on conditions. For more information about creating and testing bucket policies, see the AWS Policy Generator. (Note: You can use a deny statement in a bucket policy to restrict access to specific IAM users, even if the users are granted access in an IAM policy.)
  • Using Amazon S3 Block Public Access as a centralized way to limit public access. Block Public Access settings override bucket policies and object permissions. Be sure to enable Block Public Access for all accounts and buckets that you don't want publicly accessible.
  • Setting access control lists (ACLs) on your buckets and objects. (Note: If you need a programmatic way to manage permissions, use IAM policies or bucket policies instead of ACLs. However, you can use ACLs when your bucket policy exceeds the 20 KB maximum file size. Or, you can use ACLs to grant access for Amazon S3 server access logs or Amazon CloudFront logs.)
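The Block Public Access settings mentioned above are applied per bucket (or per account) via a small configuration document, which can be passed to aws s3api put-public-access-block. A sketch of the most restrictive configuration:

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

Note that enabling all four settings on the bucket in question would also defeat both of the options being discussed, so for this use case the settings would need to be relaxed deliberately.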

Extra Points!

In addition to using policies, Block Public Access, and ACLs, you can also restrict access to specific actions in these ways:

  • Enable MFA Delete, which requires a user to authenticate using a multi-factor authentication (MFA) device before deleting an object or disabling bucket versioning.
  • Set up MFA-protected API access, which requires that users authenticate with an AWS MFA device before they call certain Amazon S3 API operations.
  • If you temporarily share an S3 object with another user, create a presigned URL to grant time-limited access to the object. For more information, see Share an Object with Others.

Ref Link: https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/

2

I know this isn't a direct answer to your question -- but I think having a completely separate bucket for the publicly accessible items makes more sense than trying to limit a single folder within the bucket. Buckets are free to create, so there's little value in keeping private and public objects in a single bucket, as they're likely to have different encryption, storage, archival and lifecycle requirements.

Publicly available images are unlikely to be accessed via the S3 API, and more likely via HTTP, so it's usually good practice to front the bucket with CloudFront and limit all interaction with the bucket to HTTP -- unless your requirement specifically states that the bucket itself has to be public.

Finally, if you really must, I'm more inclined to place a Principal: * for the folder in question -- which is far easier to detect and glance through. The bucket policy should only allow s3:GetObject for objects in this 'folder', as anything else would be bad. Also note that s3:ListBucket applies to the entire bucket and not just a folder within it -- so beware.

Remember, S3 doesn't have a filesystem per se; it just looks like one, because S3 internally parses a key like folder/item so that it appears to be an item in a folder.
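A bucket policy statement along the lines suggested above might look like this (a sketch; the folder name public/ is an assumption):

```json
{
  "Sid": "PublicReadOnPublicFolderOnly",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::testBucket/public/*"
}
```

Because the Resource is scoped to the public/ prefix and the Action to s3:GetObject alone, anonymous users can read objects under that prefix but cannot list the bucket or touch anything else.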

keithRozario
1

I don't consider myself an S3 expert, but I think the first option makes more sense, for several reasons. The most important one is that a developer will inevitably set the wrong ACL at some point and allow public access -- not out of malice, not out of laziness, but out of ignorance (e.g. what is the developer turnover at your company? Because that is how fast knowledge about these things goes away).

So, a very important point is that you can actually do both (bucket policy + IAM), and I have the feeling this is the recommended secure approach. Users will only be able to perform the operations that are in the intersection of the permissions assigned via the bucket policy and via IAM. What I like about this is that it provides a boundary on what is possible, applied to the bucket itself. My suggestion would be to keep the bucket locked down, but allow public access only to the folder, and maybe allow s3:PutObjectAcl on the objects in the public folder.

On a slight tangent, but very relevant: you could use CloudFront, so the name of your S3 bucket is never revealed. On top of that, you can allow only CloudFront to access the public folder. In this way the bucket remains locked down, and CloudFront is restricted to the public folder alone.
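Restricting the folder to CloudFront can be done by granting read access only to a CloudFront origin access identity (OAI) in the bucket policy. A sketch, where the OAI ID and the public/ folder name are placeholders:

```json
{
  "Sid": "AllowCloudFrontReadOnPublicFolder",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
  },
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::testBucket/public/*"
}
```

With this in place, end users fetch the images through the CloudFront distribution's domain, and direct S3 URLs to the objects return Access Denied.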

Augusto