
I've started using kustomize. It lets you generate secrets with something like:

secretGenerator:
  - name: mariadb-env
    envs:
      - mariadb.env

This is great because kustomize appends a hash so that every time I edit my secret, kubernetes will see it as being new and restart the server.
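To make that concrete, the rendered output looks roughly like this (the hash suffix here is hypothetical; the real one is derived from the secret's contents):

```yaml
# Hypothetical output of `kustomize build .` for the generator above.
# The suffix changes whenever mariadb.env changes, and kustomize
# rewrites any references to the secret (e.g. in a Deployment's
# envFrom) to match, which is what forces the rollout.
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-env-7f2kg9c4bt
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: aHVudGVyMg==  # base64 of a placeholder value
```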

However, if I put kustomization.yaml under version control, then it kind of entails that I put mariadb.env under version control too. If I don't, then kustomize build x will fail because of the missing file [for anyone that tries to clone the repo]. Even if I don't put it under VCS, it still means I have these secret files on my dev workstation.

Prior to adopting kustomize, I'd just create the secret once, send it to the kubernetes cluster, and let it live there. I could still reference in my configs by name, but with the hash, I can't really do that anymore. But the hash is also incredibly useful for forcing the restart.
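For what it's worth, kustomize does let you opt out of the hash via `generatorOptions`, which restores the create-once-and-reference-by-name workflow at the cost of the automatic restart. A sketch:

```yaml
secretGenerator:
  - name: mariadb-env
    envs:
      - mariadb.env
generatorOptions:
  # Keeps the name stable as "mariadb-env", but edits to mariadb.env
  # will no longer trigger a rollout on their own.
  disableNameSuffixHash: true
```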

How are people dealing with this?

mpen
  • So the issue is mostly how to make `kubernetes build x` work for everyone who clones the repo? Just curious: can you put a placeholder in VCS and gitignore the local file? BTW, did you find any solution? – Nick Jul 03 '20 at 09:44
  • @Nick That should have said `kustomize build x`, but yeah..roughly that's the problem. I'm actually a 1-man shop, so cloning isn't a huge deal, but I still don't like having secrets lying around my dev machine. I try to do everything properly. And yes, right now I've just put them in .hgignore so I don't accidentally commit them. I still don't know what a good solution is except take them *out* of `secretsGenerator`/make them static, send them to my kubernetes cluster and delete them locally. – mpen Jul 04 '20 at 19:28

2 Answers


The answer tends to involve encryption of the secrets. That's an overly simplistic summary, so let's explore it further, because different solutions approach the problem quite differently.

There are (as my current research is telling me) largely two or three common tools for this:

  • Sealed Secrets
  • Helm-Secrets (which is Helm-specific, but uses Mozilla SOPS, which is just about secrets and YAML etc., and thus not Kubernetes specific)
  • KSOPS (which is a Kustomize plugin to use SOPS)

I don't have any practical experience with these at this time, but I can tell you that one of the biggest differences is that with Sealed Secrets, decryption is done inside the cluster at the time the SealedSecret object enters the system. The keying material lives inside the cluster. The encrypted SealedSecret (a type of CRD) can live happily in Git with whatever confidence you have in the encryption. Developers don't require access to the private key, but as a consequence they can't view the contents of the secret without it.
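The workflow, roughly, looks like this (file names are placeholders; as noted above, I haven't used this in anger):

```shell
# Build a plain Secret locally without applying it, then encrypt it
# against the in-cluster controller's public key with kubeseal:
kubectl create secret generic mariadb-env \
  --from-env-file=mariadb.env --dry-run=client -o yaml \
  | kubeseal --format yaml > mariadb-env-sealed.yaml

# The SealedSecret is safe to commit; applying it lets the controller
# inside the cluster decrypt it into a normal Secret:
kubectl apply -f mariadb-env-sealed.yaml
```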

Compare this to SOPS, whereby the decryption occurs on the client-side, and the secret will generally be maintained in some external vault, such as HashiCorp Vault, Amazon KMS, or similar.
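With SOPS, by contrast, encryption and decryption happen wherever the key material is available; a sketch with a PGP key (the fingerprint below is a placeholder):

```shell
# Encrypt locally; the encrypted file can live in Git:
sops --encrypt --pgp 1A2B3C4D5E6F7A8B9C0D1E2F3A4B5C6D7E8F9A0B \
  secrets.yaml > secrets.enc.yaml

# Decrypt on a client or CI runner that holds the private key:
sops --decrypt secrets.enc.yaml > secrets.yaml
```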

Comparing the two, one significant aspect would appear to be the use of public CI/CD infrastructure, and which parties would need to handle your secrets; this is one area in which Sealed Secrets has an advantage, but there are pros and cons either way, particularly in terms of what access all or some of your developers might require.

Unsurprisingly, both SOPS (in one of its guises) and Sealed Secrets are widely used, which explains why the likes of Kubernetes (or even opinionated distributions such as OpenShift) don't ship with any particular opinion on how to do this; which is frustrating!

If you're familiar with Ansible Vault, you could use SOPS with a PGP key in a similar way, potentially alongside the use of other KMS solutions.

Cameron Kerr

I've been looking into this topic, and I think you might want to check out overlays. An overlay is just another kustomization, referring to a base, plus patches to apply to that base.

This arrangement makes it easy to manage your configuration.

In terms of version control, the base could consist of files from an upstream repository managed by someone else, while the overlays live in a repository you own. That is explained here.
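A minimal sketch of such a layout (paths and names are illustrative):

```yaml
# Directory layout:
#   base/
#     kustomization.yaml      # shared resources, possibly from upstream
#   overlays/dev/
#     kustomization.yaml      # the file below
#     mariadb.env             # kept local or in a restricted repo
#
# overlays/dev/kustomization.yaml:
resources:
  - ../../base
secretGenerator:
  - name: mariadb-env
    envs:
      - mariadb.env
```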

As a result, you won't need to store mariadb.env under version control. You can if you like, but it could just as well be a local set of files.

Additionally, you might find interesting ideas in an overview of "Branch Structure Based Layout" (but that concept focuses on VCS, and you said that you'd like to have only a part of your code in VCS).

Edit: see Vault, and automatically restarting a pod after a secret update.

Nick
  • I do own the repo, and it is private, so I guess the risk of a public leak is minimal, but I still don't love putting secrets in the repo at all. Supposing my company were to expand, not everyone should have access to those secrets just because they have access to the repo. But `mariadb.env` does have to live somewhere, just somewhere with limited access would be ideal. – mpen Sep 07 '20 at 22:48
  • Can I ask which exactly options you are passing via mariadb.env? I mean ones that aren't the default one. As far as I remember if was possible to set some of them upon mariadb container creation. – Nick Sep 07 '20 at 23:03
  • For `mariadb.env`, just `MYSQL_ROOT_PASSWORD`. This question is a little broader than just MariadB though. My app has all kinds of secret keys; for stripe, recaptcha, sentry, google maps, mailgun, ... – mpen Sep 07 '20 at 23:47
  • Edited my answer. It looks like Vault might be an answer – Nick Sep 09 '20 at 13:24