
After reading this blog post, in which the author lays out arguments against using environment variables for storing secrets, I am unsure how to proceed with deploying my application.

His primary arguments are as follows:

  • Given that the environment is implicitly available to the process, it's hard, if not impossible, to track access and how the contents get exposed (e.g. `ps -eww`).

  • It's common to have applications grab the whole environment and print it out for debugging or error reporting. So many secrets get leaked to PagerDuty that they have a well-greased internal process to scrub them from their infrastructure.

  • Environment variables are passed down to child processes, which allows for unintended access. This breaks the principle of least privilege. Imagine that as part of your application, you call out to a third-party tool to perform some action—all of a sudden that third-party tool has access to your environment, and god knows what it will do with it.

  • When applications crash, it's common for them to write the environment variables to log files for later debugging. This means plain-text secrets on disk.

  • Putting secrets in ENV variables quickly turns into tribal knowledge. New engineers who are not aware of the sensitive nature of specific environment variables will not handle them with appropriate care (e.g. filtering them when spawning sub-processes; see the sketch after this list).
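
A minimal sketch of the child-process point (Python, with made-up variable names—`API_KEY` is just an example): by default, any subprocess gets the full parent environment unless you explicitly pass a filtered one.

```python
import os
import subprocess

# By default the child inherits the parent's entire environment,
# including any secrets stored there (API_KEY is a made-up example).
os.environ["API_KEY"] = "super-secret-value"
subprocess.run(["env"])  # the child can see API_KEY

# Passing an explicit allow-list of variables applies least privilege:
# the child only sees what it actually needs.
safe_vars = {"PATH", "HOME", "LANG"}
filtered_env = {k: v for k, v in os.environ.items() if k in safe_vars}
subprocess.run(["env"], env=filtered_env)  # API_KEY is no longer visible
```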

These arguments seem reasonable to me, but I am not a security professional. His alternative suggestion is to use Docker's secret-keeping functionality, but that assumes you're using Docker... which I'm not. I'm on Heroku, so I'm unsure how to proceed. There doesn't seem to be any support for using Vault on Heroku, as best I can tell.

temporary_user_name
  • I'm pretty sure this is a duplicate. Tl;dr though: there are more secure ways to pass secret data, but it's not quite that horrible (e.g. you can't read `/proc/<pid>/environ` without running as the same user). – forest Nov 16 '18 at 08:30
  • // , It's a duplicate, but of an SO question: stackoverflow.com/a/4136344/2146138 seems to suggest that, assuming an application's user is the only one that can read a configuration file, using environment variables isn't much more secure than putting the secrets into that configuration file with proper Linux file permissions in place. Duplicate or not, it's a story worth telling. – Nathan Basanese Jan 11 '19 at 02:47
  • // , I asked about a specific case of "put secrets in environment variables or a properly protected file" here: https://security.stackexchange.com/questions/201245/how-do-i-protect-the-azure-client-id-and-client-secret-in-hashicorp-vaults-with/201248 – Nathan Basanese Jan 11 '19 at 02:48

1 Answer


In general, storing secrets in environment variables does have some downsides, as Diogo says in his post.

Generally, for platforms like Heroku, or with technologies like Docker where the application is expected to be ephemeral, dedicated secrets management tools are the best way to go. The idea is that a dedicated tool holds the secret in encrypted form and provides it to the application at runtime.

The secret can then be made available to the application, generally as a file, which the application reads at runtime to retrieve the secrets it needs.
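
As a rough sketch of that pattern (the file path and key name below are hypothetical, not something any particular tool mandates), the application reads the secret from a file provisioned at runtime rather than from its environment:

```python
import json
from pathlib import Path

# Hypothetical location where a secrets tool drops credentials at runtime
# (e.g. a tmpfs-backed file); nothing here is committed to the repository.
SECRETS_FILE = Path("/run/secrets/app-secrets.json")

def load_secrets() -> dict:
    # Read secrets at startup instead of relying on the process environment.
    with SECRETS_FILE.open() as f:
        return json.load(f)

secrets = load_secrets()
db_password = secrets["db_password"]  # hypothetical key name
```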

Two examples of tools in this area are HashiCorp Vault and Square's Keywhiz.

In addition, if you're deploying to a cloud provider, there is generally some kind of secrets management facility available, for example AWS Secrets Manager.
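
For instance, fetching a secret from AWS Secrets Manager with boto3 might look roughly like this (the secret name and region are placeholders, and the instance/dyno is assumed to have IAM permission to read the secret):

```python
import boto3

# Assumes credentials with permission to call secretsmanager:GetSecretValue.
client = boto3.client("secretsmanager", region_name="us-east-1")

# "my-app/db-password" is a placeholder secret name.
response = client.get_secret_value(SecretId="my-app/db-password")
db_password = response["SecretString"]
```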

I've not had much experience with secrets management on Heroku; however, they do seem to have an add-on called ICE which operates in this area.

Rory McCune
  • I'm having a bit of an issue comprehending how these services help. Firstly, as best I can tell, all ICE does is generate credentials to use with AWS KMS. Sure, that's fine, I guess. But then here's the way I'm thinking about it: let's say I follow [the Ruby example](https://devcenter.heroku.com/articles/ice#using-with-rails) listed in the ICE documentation. (cont'd) – temporary_user_name Nov 17 '18 at 03:45
  • (cont'd) Excellent, now my credentials are encrypted. But... the ICE credentials are still stored in plain text as environment variables, as demonstrated in that code. So a malicious piece of code could just access them and then use AWS KMS to decrypt my actual secrets the same way I'm doing. So how has anything really changed here? – temporary_user_name Nov 17 '18 at 03:45
  • @temporary_user_name I agree. The machine that ends up using the credentials needs to be authenticated. Proxying that authentication via e.g. AWS Secrets Manager does not make it more secure. **However**: if the credentials are compromised, it can be as easy as one click to create new credentials if rotation is automated well. You then need to change only one password/keypair as opposed to quite a lot. – marstato Dec 28 '18 at 10:43
  • // , Hell yes, Rory. HashiCorp Vault is the more "enterprise-ey" of the two, and I have personally used it to solve this problem on a global scale for RSA keys. Thanks for not only answering the question, but also including possible solutions to the attendant problems. – Nathan Basanese Jan 14 '19 at 21:14
  • What about storing secrets in .env.development (for testing purposes, e.g. pre-filled password fields for recurring login testing), given that this .env file is only used locally and not committed to the repository? – elMeroMero Oct 28 '20 at 13:13
  • It uses an internal private metadata service to verify that it is allowed to request secrets. – Oliver Dixon Aug 25 '22 at 12:28