
This is a bit of a tough question to ask, so bear with me; I'll try to be brief if I can.

Problem statement

I'm looking for a modern best practice for managing and deploying the trusted root CA certificate list of Docker containers. I know I can bake the certificates into each container through a Dockerfile, but that means building a new Docker image for every container each time a certificate or CRL changes. Ideally, in the spirit of "cloud native" (see 12-factor apps), our containers would have no certificates or CRLs baked in, and these would all come from the environment somehow.

To give a bit more context: I'm running these containers in Kubernetes, but they could run on any container platform, such as OpenShift, AWS, etc. My objective is a single solution that allows for true container portability.

Potential solutions

Some ideas I have toyed with:

1) Create another container image that holds all of the certs and CRLs and volume-mount it into each container. This is a common approach, but it requires saving a new image every time a cert or CRL is updated. Not awful, but it doesn't feel very "cloud native".

2) Use Spring Cloud Config Server and build a little Spring Boot sidecar that installs all of the certs at startup. The external centralized config feels cloud native, but it may require heavy modification of the start order of services in the container that need those certs, and it feels too complicated for a container.

3) Use a proxy where all certs are managed. This feels like a hack to force all traffic through the proxy; there are possible throughput and contention issues under heavy load to HTTPS resources, and it may not be feasible to route everything over straight HTTP.

4) Use environment variables and a startup script that installs them. This is a simple, generalized approach for any type of container deployed anywhere, but I'm not sure it's appropriate to do it this way: those would be a lot of very large environment variable values. Can you imagine what a "printenv" would look like?
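For what it's worth, option 4 could be sketched like this; the function and variable names are hypothetical, not from any real deployment. The idea is an entrypoint helper that decodes a base64-encoded PEM bundle from the environment into the trust-store directory before the main service starts:

```shell
# Minimal sketch, assuming a Debian-family base image; all names are made up.
install_env_cas() {
    bundle="$1"    # base64-encoded PEM text, e.g. "$TRUSTED_CA_BUNDLE"
    dest_dir="$2"  # e.g. /usr/local/share/ca-certificates
    mkdir -p "$dest_dir"
    # Decode the environment value back into a PEM file.
    printf '%s' "$bundle" | base64 -d > "$dest_dir/extra-cas.crt"
    # Fold the new certs into the system bundle where the tool exists;
    # silently skip it on images that lack it.
    update-ca-certificates 2>/dev/null || true
}

# Entrypoint usage (operator sets the variable at deploy time,
# e.g. TRUSTED_CA_BUNDLE=$(base64 -w0 ca-bundle.pem)):
#   install_env_cas "$TRUSTED_CA_BUNDLE" /usr/local/share/ca-certificates
#   exec my-service
```

Base64-encoding sidesteps newline mangling in env values, though it makes the "printenv" problem above even more visible.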

5) Use Kubernetes Secrets (not sure this can work yet). It's a Kubernetes-only solution, which is a downer for write once, run anywhere.
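Since CA certificates are public, option 5 wouldn't even strictly need a Secret; a ConfigMap mount would do. A minimal sketch, with every name hypothetical:

```yaml
# Created out-of-band, e.g.:
#   kubectl create configmap trusted-cas --from-file=ca-bundle.crt=./bundle.pem
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: trusted-cas
          mountPath: /etc/ssl/certs/extra   # wherever the app reads extra CAs
          readOnly: true
  volumes:
    - name: trusted-cas
      configMap:
        name: trusted-cas
```

Updating the ConfigMap updates the mounted file without a rebuild, but as noted above this ties the deployment to Kubernetes.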

I know this is a bit long-winded; I apologize for that, but I didn't know how to compress it any further. I'm looking forward to the discussion on this.

  • Great question! Can you expand on what the certs/CRLs are used for and how early they're needed in the boot-up process? ie are these TLS server certs that need to be available to nginx when it starts, vs client certs that allow the docker image to join the network (in which case they _need_ to be there prior to any network action)? Also, is there one cert / private key that you are cloning across instances, or can each instance contact a CA and enroll for a new cert on startup? – Mike Ounsworth Aug 13 '18 at 22:07
  • Have you considered putting the private key in an HSM (on a physical server you could also do a USB smartcard) and then putting the password to access the HSM into the docker image? Each cloud provider offers some sort of HSM-backed centralized key-management solution, though APIs probably differ between vendors. – Mike Ounsworth Aug 13 '18 at 22:21
  • It's a tough question to ask and I think I just came up with a solution shortly after posting that works well enough for us that I might roll with for now. These certs are the DoD Root certs and for dev/test/other environments are self signed server certs (or private CAs) that are necessary for our Docker containers to talk to other parts of our deployment or external to our software. The solution we're planning on using is to grab CAs and CRLs from tar balls posted somewhere. This mimics DoD networks and also can be bootstrapped in various ways depending on our environment. – Scott Lindner Aug 13 '18 at 22:34
  • Great. If you're allowed to post the key parts of your solution, that would make a good self-answer :) – Mike Ounsworth Aug 13 '18 at 22:36
  • As for the HSM solution. We deploy our software to tons of various environments. We can never be guaranteed one cloud provider's solution will be available at another. So I'm looking to do things within the container if possible so we have more generalized solutions. Knowing that may not always be the case. – Scott Lindner Aug 13 '18 at 22:37
  • I will at a minimum describe my solution once it is working and call it my own self-answer. I have been searching the Internet for good ideas on this for over a week and haven't come up with much that's compelling. So my thinking for now is to use supervisord to grab and install these tarball certs and CRLs at container start before the main service is started. I will have to play around with start delays or lockfiles to make sure there aren't timing issues. – Scott Lindner Aug 13 '18 at 22:39
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/81610/discussion-between-mike-ounsworth-and-scott-lindner). – Mike Ounsworth Aug 13 '18 at 22:44
  • If you're using Jenkins or similar in your CI/CD pipeline, one option might be to use that to automate rebuilds based on updates to the CA store. As you need a process for regular container rebuilds anyway, to ensure that patches are built into the images, it seems like one option to leverage that existing process to embed certificates into the images. – Rory McCune Aug 14 '18 at 14:09
  • This is a problem we've considered building a native solution for in Kubernetes. See and feel free to chime in on https://github.com/kubernetes/kubernetes/issues/63726 – Tim Allclair Aug 14 '18 at 22:19

3 Answers


A solution that emerged from chat / comments:

  • Post a tarball of what the trust store should be at some static URL -- you have the ability to update this at your convenience.
  • For the TLS cert on this URL, use a cert that the docker container will trust in its default configuration -- from a publicly-trusted CA or a private root CA explicitly for this purpose that you manually add to the docker template. Let's call this the bootstrapping CA.

Where you need to put the bootstrapping CA cert depends on how you're fetching the tarball. For example, with curl:

$ curl -O https://my.server/docker_root_cas.pem --cacert [file]

Since CA certificates are public information, you can drop the bootstrapping CA cert anywhere on the filesystem (making sure, of course, that an attacker can't modify it pre-bootup; but if they could, you'd have bigger problems).

You have now securely fetched the list of trusted CA certs. This also gives you an easy update mechanism: have containers periodically re-pull the list, either from this URL or from one protected by a TLS cert that chains to one of the CAs you just imported.
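Put together, the fetch-and-install step might look like the following sketch. The URL, paths, and bootstrapping-CA location are illustrative, not prescribed by the answer:

```shell
# Sketch of the bootstrap step: fetch the CA tarball over TLS pinned to the
# bootstrapping CA, then unpack it into the system trust store.
fetch_trust_store() {
    url="$1"        # e.g. https://my.server/docker_root_cas.tar.gz
    boot_ca="$2"    # bootstrapping CA cert baked into the image
    dest="$3"       # e.g. /usr/local/share/ca-certificates
    work=$(mktemp -d)
    # -f: fail on HTTP errors; --cacert: trust only the bootstrapping CA.
    curl -fsS --cacert "$boot_ca" -o "$work/cas.tar.gz" "$url"
    mkdir -p "$dest"
    tar -xzf "$work/cas.tar.gz" -C "$dest"
    rm -rf "$work"
    # Debian-family images: fold the new certs into the system bundle;
    # quietly skip on images without the tool.
    update-ca-certificates 2>/dev/null || true
}

# Run once at container start, before the main service:
#   fetch_trust_store https://my.server/docker_root_cas.tar.gz \
#       /opt/bootstrap/bootstrap-ca.pem /usr/local/share/ca-certificates
```

For the periodic-update variant, the same function can simply be re-run on a timer.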


Security considerations

  • The docker image needs to be protected against an attacker replacing the bootstrapping CA file with their own.
  • The server hosting the tarball becomes a big target, so harden it well. Doubly-so if you have an update mechanism where containers continue to pull updated trust stores periodically because now a compromise of the tarball server will compromise all new and all existing containers in one fell swoop.
Mike Ounsworth

I have come to a conclusion that works well for our needs, and I want to offer it as an answer to my own question. We have implemented a shell script, injected into our containers, that can be enabled and configured via environment variables. At container startup, it downloads a tarball or zip of all of the trusted CA certificates (and, for dev purposes, self-signed certs) and installs them. This lets us create a single container that can deploy anywhere without baking the certs into the image. It also solves a certificate and CRL distribution problem: since we strive for our containers to be as 12-factor compliant as possible (aka cloud native), they can be scheduled to restart on a periodic basis, which effectively distributes our trusted certificates and CRLs.

I have a proof of concept of this working, and it works extremely well. For those supporting the DoD and IC, there is one additional advantage: the approved certificates and CRLs for the trusted networks are already maintained and posted on the network, so you can point this script directly at those official tarballs to distribute your certificates and CRLs.

I am using supervisord in our containers and have configured all [program]s not to autostart, with the exception of a bootstrap.sh that first runs the certificate installation script and then uses supervisorctl start [program] to start the programs defined in supervisord.conf. bootstrap.sh is the only program in supervisord.conf that autostarts, which ensures the certificates are in place before anything else starts.
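The supervisord layout described above might look roughly like this; program names and paths are illustrative, not our actual configuration:

```ini
; supervisord.conf sketch: only the bootstrap program autostarts.
; bootstrap.sh runs the certificate installation script, then issues
; "supervisorctl start my-service" to launch everything else.
[program:bootstrap]
command=/opt/app/bootstrap.sh
autostart=true
autorestart=false

[program:my-service]
command=/opt/app/run-service.sh
autostart=false
```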

One nitty-gritty detail that may be essential for those looking to implement a similar solution: we also optionally pass a certificate into the container as an environment variable, used only for trusting the certificate tarball URL (which is itself injected via an environment variable). This solves the potential catch-22 of needing a new certificate in order to obtain the certificate tarball.
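That bootstrap step might be sketched as follows; the variable names are made up for illustration and are not the actual script:

```shell
# Hypothetical names throughout: BOOTSTRAP_CA_PEM carries the PEM text of the
# cert used only to trust CA_TARBALL_URL, both injected via the environment.
write_bootstrap_ca() {
    pem="$1"   # PEM text taken from the environment
    out="$2"   # file path that curl's --cacert will point at
    printf '%s\n' "$pem" > "$out"
}

# At container start, before anything else:
#   write_bootstrap_ca "$BOOTSTRAP_CA_PEM" /tmp/bootstrap-ca.pem
#   curl -fsS --cacert /tmp/bootstrap-ca.pem -O "$CA_TARBALL_URL"
```

Once the tarball's certs are installed, subsequent pulls can rely on the system trust store instead of this bootstrap cert.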


I hope this answer is not too late! One solution for managing secrets in the cloud-native world is Vault. Vault securely retains anything treated as a secret (e.g. crypto keys, API keys, X.509 certificates) in a way that permits machine-to-machine communication, such as API calls. According to this link, Vault can manage X.509 certificates, and there is a detailed description of a possible deployment. Several aspects of key management are supported, e.g. enabling, changing, and revoking. Vault works with Docker containers and Kubernetes; in fact, it is also recommended by the Cloud Native Computing Foundation (CNCF), the leading organisation fostering the cloud-native movement.

SyCode