
I am developing a SaaS where a user will be able to upload a custom JavaScript function that runs when an event happens. In order to preserve the integrity of the system, I am using AWS Lambda to run these functions such that they can run in isolation away from the rest of the system.

In general, a paying customer will be allocated their own individual Lambda that they can deploy code to via my web interface. It will be their responsibility to secure their code, because they are paying for it.

However, I would also like to offer a demo option such that a new customer can upload some custom code and see how the system works before they take their wallet out. The problem thus follows:

Assume an AWS Lambda exists with bare minimum IAM permissions (it cannot even generate CloudWatch logs). It is linked to API Gateway so that it is invoked whenever a certain URL is requested, for example POST /demo. A prospective user can POST arbitrary JavaScript code to the Lambda (assume Node.js v12 runtime) which it will then eval and return the result. Another user can POST arbitrary code after them, and so on and so forth.

This plan sounds feasible, but then I was reading through the AWS Lambda whitepaper and saw that execution environments are sometimes re-used between invocations. This means that a malicious user could write a file to /tmp that could later be read by another user. Frankly, I don't care, as long as /tmp is the only writable directory. I am, however, concerned about whether a user can overwrite the source code itself, replacing my index.js with their own malicious version. Since the whitepaper explicitly states that /tmp is writable, I assume the rest of the filesystem is read-only, which would mean an attacker cannot modify my source code and hijack the function. Indeed, I just tried it via LambdaShell and got the following error:

Command failed: printf hello > index.js
/bin/sh: index.js: Read-only file system

All well and good!

So, ServerFault, I have these remaining questions:

  1. Should the demo Lambda allow Internet access? Although an attacker could retrieve my AWS credentials, they would be essentially useless if the function's role is locked down properly in IAM. Infinite loops or large downloads can be halted by giving the demo Lambda a tiny timeout.

  2. I mentioned at the beginning that each paying customer gets their own Lambda. This is based on my assumption that, since the runtime is shared between executions, there is no real way to isolate customers' code from each other (furthermore, if a customer saves something to /tmp, the next customer who comes along can mess it up).

However, let's assume for a moment that a customer doesn't care about or use /tmp. Could it be safe to dynamically load and eval code for each customer using the same Lambda?

Presume the customer code is hosted in S3 as customerA.js, customerB.js, etc. When an event comes in, my code would somehow determine the filename of the code that should be run; the Lambda would download that file from S3, eval it, and return the result.

An attacker thus has the ability to request a filename from S3 and read it; however, we can limit their ability to list files or do anything else on S3. This is still not good enough, though, because customer A can request customerB.js and exfiltrate it through their malicious code.

To make things difficult, let's assume that an event can be triggered via a webhook, such that someone could go to /trigger/customerA/with/params to trigger customer A's code. This prevents us from randomizing filenames to make them harder to guess, because we have to be able to determine the filename from the request's URL.

I am not an IAM expert, but would it be possible to build a policy that limits S3 access based on the API Gateway request URL? For example, to tell AWS that a Lambda invocation triggered via /trigger/customerA/* can only read the S3 file customerA.js and nothing else. This would prevent an attacker from reading another customer's code at an infrastructural level and require no extra code from me.
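For reference, the per-customer grant I have in mind would look something like this as a plain IAM statement (bucket name hypothetical). The open question is whether AWS offers a condition that ties this to the invoking URL at runtime, rather than baking a separate role like this into each customer's setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::customer-code-bucket/customerA.js"
    }
  ]
}
```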

I am assuming this approach is wrong, but I am interested in what others have to say. Allowing customers to share Lambdas would let me remove the onboarding fee, which is only in place to prevent new users from creating new Lambdas on my AWS account until they have provided a payment method (in part because there is a limit on how many Lambdas you can have, and in part because I don't want new users blowing up my bill before I can put them on the hook for it).

Thanks for your input, everyone!

Nexuist