
I have a question about accessing buckets on AWS S3. Suppose we have a bucket that must be publicly readable by everyone, while only my API should be able to PUT and DELETE items in the bucket. To restrict access, consider these two alternatives:

1) Leave the public policy configured to allow the PUT, DELETE, and GET methods, and use CORS to restrict the DELETE and PUT operations so they can only be executed from my API's domain.

2) Set the public policy to read-only and create a service account for my API with PUT and DELETE permissions on the bucket.

What's the best alternative in terms of security?

Vivi

1 Answer


This depends on where your API is calling from. AWS (not surprisingly) has many more options if you operate entirely within their platform. If you're operating outside of AWS, I'd recommend option 2. If you're operating within AWS, option 4 below is the simplest that I think meets your needs, and option 3 is presented in case of stronger paranoia.

1) CORS

Answering abstractly, relying on CORS should be fine, but make sure you are comfortable writing CORS rules, as a lot of CORS bypass attacks occur due to improper configuration. This would make me a bit itchy, because a lot of faith has to be put into the CORS policy being well written and covering all possibilities. And it's a single layer of defense, which I've learned to distrust. You could strengthen this by limiting the PUT/DELETE calls in the S3 bucket policy to a specific caller IP address, if your API calls from a public IP address that you own (note, however, that you won't be able to put/delete objects in the AWS console either, unless your web browser is using that public IP address).
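
As a rough sketch of what this could look like, here is a tightly scoped CORS rule combined with the IP-conditioned bucket policy, applied via boto3. The bucket name, API domain, and source IP are all placeholders you'd replace with your own:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

# CORS: GET allowed from anywhere, PUT/DELETE only from the API's origin.
s3.put_bucket_cors(
    Bucket=BUCKET,
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedMethods": ["GET"],
                "AllowedOrigins": ["*"],
                "AllowedHeaders": ["*"],
            },
            {
                "AllowedMethods": ["PUT", "DELETE"],
                "AllowedOrigins": ["https://api.example.com"],  # hypothetical API domain
                "AllowedHeaders": ["*"],
            },
        ]
    },
)

# Bucket policy: public read, writes allowed only from one source IP.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Sid": "WriteOnlyFromApiIp",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Hypothetical public IP that your API calls from.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}},
        },
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```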

2) Service Account

Using a service account would be a better option in my opinion. Not necessarily because it's a technically superior control, but because it's easier to configure and harder to mess up. The vast majority of attacks that happen these days are not because the technology is lacking, but because of poor configuration; simpler security solutions tend to be more successful for this reason. However, this requires that you protect those service account credentials (I'm assuming you're thinking of using an AWS IAM user and giving the API keys to your application), which usually involves regular rotation of the keys and storage in an external encrypted store (such as HashiCorp's Vault).
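
A minimal sketch of this setup, assuming a dedicated IAM user acts as the service account and gets a least-privilege inline policy (the user name, policy name, and bucket name are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")
BUCKET = "my-example-bucket"  # hypothetical bucket name

# Least-privilege policy for the API's service account: write/delete only.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

iam.create_user(UserName="api-service-account")  # hypothetical user name
iam.put_user_policy(
    UserName="api-service-account",
    PolicyName="s3-write-delete-only",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
)

# Generate the access keys the API will use. Rotate these regularly and
# keep the secret in an external encrypted store, never in code.
keys = iam.create_access_key(UserName="api-service-account")["AccessKey"]
print(keys["AccessKeyId"])
```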

3) EC2 with a Role

If your API/application is running on/calling from an EC2 instance on AWS, I'd suggest another option. You can give the instance an IAM role with the write permissions you need. Then you can use the S3 bucket policy to allow s3:GetObject publicly, and s3:DeleteObject/s3:PutObject only to the role you define. Then you don't have to worry about storing/managing API keys. You can add another layer by creating a VPC endpoint in your VPC and limiting the put/delete operations to those that come through that endpoint; a sketch of such a policy follows. This would limit the put/delete operations to only the specific role, and only from within the AWS VPC that you specify. Again, please note that this will preclude any write operations to your bucket objects from the AWS console.
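
Here is a sketch of the bucket policy this option describes; the bucket name, account ID, role name, and VPC endpoint ID are placeholders you'd fill in with your own:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Sid": "WriteOnlyForApiRole",
            "Effect": "Allow",
            # Hypothetical role attached to the EC2 instance running the API.
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/api-writer"},
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Optional extra layer: writes must arrive via this VPC endpoint.
            "Condition": {"StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
        },
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```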

4) AWS API

If you are willing to accept a bit more attack surface in exchange for a great simplification of the solution (which may in turn make the solution more likely to succeed), you could use an S3 bucket policy to allow public read access from anywhere, with all other operations allowed only from your AWS account. Then you can run your API on an EC2 instance with a role granting S3 access. The delineation here is that anything with AWS API access to your S3 bucket can write to it (so your application on EC2, your browser when you're interacting with the console, and any other integrations that you add using AWS roles or API keys).
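
One way this could be expressed in a bucket policy is a public-read allow paired with an explicit deny on writes from any principal outside your account; the bucket name and account ID below are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"   # hypothetical bucket name
ACCOUNT_ID = "123456789012"    # hypothetical AWS account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            # Explicit deny beats any allow: writes from outside the
            # account are blocked, while any principal inside the account
            # with IAM permissions (EC2 role, console user) can still write.
            "Sid": "DenyWritesOutsideAccount",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringNotEquals": {"aws:PrincipalAccount": ACCOUNT_ID}},
        },
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```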

TopherIsSwell