
There seems to be a bit of a "chicken and egg" problem with the passwords to secrets managers like HashiCorp Vault on Linux.

While researching this for some Linux servers, someone clever asked, "If we're storing all of our secrets in a secrets storage service, where do we store the access secret to that secrets storage service? In our secrets storage service?"

I was taken aback: there's no point in using a separate secrets storage service if every Linux server I'd otherwise store the secrets on holds its access token anyway.

For example, if I move my secrets to Vault, don't I still need to store the credentials for accessing HashiCorp Vault somewhere on the Linux server?

There is talk about solving this in some creative ways, or at least making things better than they are now. We can do clever things like authentication based on CIDR ranges or password mash-ups. But there is still a security trade-off: for example, if a hacker gains access to my machine, they can get to Vault if access is based on CIDR alone.

This question may not have an answer, in which case the answer is "No, there is no commonly accepted silver-bullet solution; go get creative, find your trade-offs, and so on."

I want an answer to the following specific question:

Is there a commonly accepted way to secure the password to a remote, automated secrets store like HashiCorp Vault on modern Linux servers?

Obviously, plaintext is out of the question.

Is there a canonical answer to this? Am I even asking this in the right place? I considered security.stackexchange.com, too, but this seemed specific to a way of storing secrets for Linux servers. I'm aware that this may seem too general, or opinion based, so I welcome any edit suggestions you might have to avoid that.

We laugh, but the answer I get here may very well be "in Vault". :/ For instance, a Jenkins server (or something else) might hold a six-month, revocable credential that it uses to generate one-time-use tokens, which it then uses to obtain its own ephemeral (session-limited) password from Vault, which in turn gets it one segment of info.
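
To make that flow concrete, here is roughly what it could look like against Vault's HTTP API. This is only a sketch: the environment variable names, the "jenkins-deploy" policy, and the KV v2 path are placeholders for illustration, and the parent token stands in for the six-month revocable credential.

    # Sketch: a CI job holds one long-lived credential and uses it only to mint
    # short-lived, use-limited child tokens, then reads its secret with those.
    # Endpoint paths follow Vault's HTTP API; names and paths are illustrative.
    import os
    import requests

    VAULT_ADDR = os.environ["VAULT_ADDR"]             # e.g. https://vault.example.com:8200
    PARENT_TOKEN = os.environ["JENKINS_VAULT_TOKEN"]  # the long-lived, revocable credential

    # 1. Mint an ephemeral child token: 15-minute TTL, usable at most twice.
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/token/create",
        headers={"X-Vault-Token": PARENT_TOKEN},
        json={"ttl": "15m", "num_uses": 2, "policies": ["jenkins-deploy"]},
        timeout=10,
    )
    resp.raise_for_status()
    child_token = resp.json()["auth"]["client_token"]

    # 2. Use the ephemeral token to read only the segment of info this job needs.
    secret = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/jenkins/deploy-key",   # KV v2 path (illustrative)
        headers={"X-Vault-Token": child_token},
        timeout=10,
    ).json()["data"]["data"]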

Something like this seems to be in the same vein, although it'd only be part of the solution: Managing service passwords with Puppet

  • It basically does not exist. If you have data, and someone breaks in to the place where the data is, they now have that data. One way to make it harder is to require manually entering a key, held only in CPU registers, once a server is booted. It doesn't help against someone with root access, but it's practically the last stop before full paranoia and turning back to wax seals and carrier birds. – John Keates Oct 15 '16 at 02:15
  • Usually you would use an HSM (Hardware Security Module) to protect sensitive information. Thus an attacker who becomes root on the machine could use this sensitive key material but could not steal it. But in the case of passwords or symmetric keys this is difficult to implement, since the HSM has to hand the password or symmetric key to the process that uses it. You could encrypt the password with the HSM and let the HSM decrypt it, but in that case, too, the attacker could decrypt the password and go off with it. – cornelinux Oct 16 '16 at 08:15
  • You could take this question to security.stackexchange.com. – cornelinux Oct 16 '16 at 08:18
  • // , I could. How would I do this, and on what basis? – Nathan Basanese Oct 16 '16 at 08:30
  • // , @JohnKeates, that's true as far as it goes. But how far it goes, of course, depends on the definition of "breaks in". For instance, it's possible for me to get access to a system's core dumps or disks without getting root privileges, and even root privileges can potentially be limited by a TPM in some ways. I can also get access to a secret without access to the network from which the secret is typically used. What do you think of my answer(s)? – Nathan Basanese Jun 27 '18 at 22:28

1 Answer


// , First of all, the problem discussed here goes beyond mere delivery of "secret zero", or what is called "secure introduction" in ops parlance.

Rather, this is about securing the secret, once received.

I don't have a silver bullet solution to this. But there are a few defense-in-depth options:

  1. Use response wrapping for delivery of the secret (see the response-wrapping sketch after this list).
  2. Place CIDR restrictions on the token for the secret store, so that the token is only usable from a specific set of IP addresses, and use a reliable mechanism such as the PROXY protocol (not X-Forwarded-For headers) to pass client IP addresses to the secret store. For example, set token_bound_cidrs so that only one subnet can ever use the token (see the AppRole sketch after this list).
  3. Store the secret in memory only, and lock that memory with mlock (see the mlock sketch after this list).
  4. If possible, place time limits on the secret itself, or even allow the secret to be used only once.
  5. Monitor for unusual use of the secret; e.g., the regular client should alert if its one-time-use secret doesn't work (because it has already been used), and the server should alert if someone tries to use the secret from outside the allowed CIDR range.
  6. This is kind of going out on a limb, but you might allow a "honeypot" secret to exist on the server alongside the regular secret, if possible: one that gives "access" to a set of credentials for a system that just records access and alerts.
  7. Require re-authentication for each use of the locally stored secret, which means additional authentication factors beyond the locally stored secret have to be presented on each use, e.g. signed metadata unique to the compute instance or workload or, in Vault, an AppRole (also covered in the AppRole sketch below).
  8. Disable any sort of disk caching, to prevent the secret from touching any potentially persistent storage.
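
For option 1, here is a minimal sketch of response wrapping against Vault's HTTP API. The KV v2 path, the DEPLOYER_TOKEN variable, and the 120-second wrap TTL are illustrative assumptions, not requirements.

    # Sketch of response wrapping (option 1): a trusted deployer asks Vault to
    # wrap the secret, hands only the single-use wrapping token to the target
    # server, and the server unwraps it once. Paths and variable names are made up.
    import os
    import requests

    VAULT_ADDR = os.environ["VAULT_ADDR"]

    # Trusted side: read the secret, but ask Vault to wrap the response.
    wrapped = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/app/db-password",
        headers={
            "X-Vault-Token": os.environ["DEPLOYER_TOKEN"],
            "X-Vault-Wrap-TTL": "120s",   # the wrapping token dies after 2 minutes
        },
        timeout=10,
    )
    wrapped.raise_for_status()
    wrapping_token = wrapped.json()["wrap_info"]["token"]
    # Deliver wrapping_token to the target server (cloud-init, pipeline variable, ...).

    # Target server: unwrap exactly once; any second attempt fails loudly.
    unwrapped = requests.post(
        f"{VAULT_ADDR}/v1/sys/wrapping/unwrap",
        headers={"X-Vault-Token": wrapping_token},
        timeout=10,
    )
    unwrapped.raise_for_status()
    secret = unwrapped.json()["data"]["data"]     # KV v2 payload

Because the wrapping token is single-use, a failed unwrap on the target server is itself a strong signal that someone intercepted the token, which feeds straight into option 5.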
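
For options 2, 4 and 7 together, one approach is to pin an AppRole's tokens to a subnet, keep their TTL short, cap their number of uses, and re-authenticate on every run instead of reusing a long-lived token. The role name, policy, and CIDR below are made up, and the AppRole auth method is assumed to be enabled at its default path.

    # Sketch for options 2, 4 and 7: an AppRole whose tokens are CIDR-bound,
    # short-lived, and use-limited, so even a stolen token has limited value.
    import os
    import requests

    VAULT_ADDR = os.environ["VAULT_ADDR"]
    ADMIN_TOKEN = os.environ["VAULT_ADMIN_TOKEN"]

    # One-time setup by an operator: define the role with tight constraints.
    requests.post(
        f"{VAULT_ADDR}/v1/auth/approle/role/web-frontend",
        headers={"X-Vault-Token": ADMIN_TOKEN},
        json={
            "token_policies": ["web-frontend"],
            "token_ttl": "10m",                      # option 4: short-lived
            "token_num_uses": 3,                     # option 4: few uses
            "token_bound_cidrs": ["10.0.12.0/24"],   # option 2: one subnet only
            "secret_id_bound_cidrs": ["10.0.12.0/24"],
        },
        timeout=10,
    ).raise_for_status()

    # On the server, per run (option 7): re-authenticate rather than reuse a token.
    login = requests.post(
        f"{VAULT_ADDR}/v1/auth/approle/login",
        json={
            "role_id": os.environ["APPROLE_ROLE_ID"],
            "secret_id": os.environ["APPROLE_SECRET_ID"],
        },
        timeout=10,
    )
    login.raise_for_status()
    session_token = login.json()["auth"]["client_token"]

The role_id and secret_id still have to live somewhere on the machine, which is exactly the trade-off in the question; the gain is that whatever they yield is short-lived, use-limited, and useless from outside the bound subnet.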
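
For option 3, here is a rough sketch of holding a secret only in page-locked memory from Python, calling mlock(2) through ctypes. The glibc name "libc.so.6" and the single-page buffer are assumptions for illustration; long-running services usually do this natively (Vault's own server tries to mlock its memory by default).

    # Sketch for option 3: keep the secret in memory that cannot be swapped to
    # disk, and zero it before releasing. Requires CAP_IPC_LOCK or a sufficient
    # RLIMIT_MEMLOCK; assumes glibc on Linux.
    import ctypes
    import mmap
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    length = mmap.PAGESIZE
    buf = mmap.mmap(-1, length)                       # anonymous mapping, no file backing
    view = (ctypes.c_char * length).from_buffer(buf)  # needed only to get the address
    addr = ctypes.addressof(view)

    # Pin the page in RAM so it never hits swap.
    if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(length)) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    secret = b"received-from-vault"                   # e.g. the unwrapped secret above
    buf[: len(secret)] = secret                       # the secret now lives only in locked RAM

    # ... use the secret ...

    # Zero, unlock, and release when done.
    buf[:] = b"\x00" * length
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(length))
    del view
    buf.close()

Note that this only keeps the page out of swap; it does nothing against an attacker who can already read the process's memory, and core dumps should be disabled separately.
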
  • // , I didn't add anything about TPM, because my lack of experience with it means that my answers would be even more hand-wavy than the 8 I listed already. But if someone has experience with TPM, I would love to see your answers. – Nathan Basanese Jun 27 '18 at 22:31