
When it comes to Docker, it is very convenient to use a third-party container that already exists to do what we want. The problem is that those containers can be very complicated and have a large parent tree of other containers; they can even pull code from repositories like GitHub. All of this makes a security audit harder.

I know it could sound naive, but could it be easy for someone to hide malicious content in a container? I know that the answer is YES, but I would like to know to what extent, and whether it's worth the risk. I'm familiar with GitHub, and I usually take a look at the source code when I use third-party code (unless it's a well-known project).

I am wondering if the community is watching for this kind of behavior, because a malicious container could do more harm than malicious code.

How likely is a container to be malicious? (Consider a popular one.) And in what ways could it damage or abuse other components of the underlying system, or the other systems on the LAN? To put it simply: should I trust them?

Edit: I found an article from Docker that sheds some light on Docker security and best practices: Understanding Docker security and best practices.

Rory McCune
0x1gene
    Hi 0x1gene. Your question is a bit too broad to be answered easily. Questions such as "Should I trust" or "Is X secure enough" systematically lead to the answer "It depends (on your threat model)". This being said, Docker containers are based on pretty solid (but not infallible) security mechanisms. They should probably not be more or less trusted than other confinement mechanisms, unless you know exactly what you're talking about and what you're protecting from. As for container distribution, it suffers from the same trust/risk issues as any other type of code bundle. – Steve Dodier-Lazaro May 08 '15 at 12:49
  • http://www.infoq.com/news/2015/05/Docker-Security-Benchmark – atdre May 08 '15 at 16:52
  • There is also [a project](https://coreos.com/blog/rocket/) from CoreOS that aims at better security. – Petr May 09 '15 at 18:49
  • It looks like the isolation security is similar to LXC. See also: https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal https://www.reddit.com/r/docker/comments/mb1ahw/is_it_possible_for_a_dockerfile_not_an_image_to/ https://www.reddit.com/r/homelab/comments/9uhsnq/security_docker_containers_vs_lxc/ – baptx Jan 31 '22 at 19:01

8 Answers


At the moment there is no easy way to work out whether to trust a specific Docker container. There are base containers provided by Docker and OS vendors which they call "trusted", but the software as yet lacks good mechanisms (e.g. digital signing) to check that images haven't been tampered with.

For clarification, to quote the recently released CIS security standard for Docker, section 4.2:

Official repositories are Docker images curated and optimized by the Docker community or the vendor. But, the Docker container image signing and verification feature is not yet ready.

Hence, the Docker engine does not verify the provenance of the container images by itself.

You should thus exercise a great deal of caution when obtaining container images.

When you get into the world of general third-party containers from Docker Hub, the picture is even trickier. AFAIK Docker does no checking of other people's container files, so there are a number of potential problems:

  • The container contains actual malware. Is this likely? No one knows. Is it possible? Yes.
  • The container contains insecure software. Dockerfiles are basically batch scripts that build a machine. I've seen several that do things like download files over unencrypted HTTP connections and then run them as root in the container. For me, that's not a good way to get a secure container.
  • The container uses insecure settings. Docker is all about automating the set-up of software, which means that you are, to an extent, trusting everyone who wrote the Dockerfiles to have configured things as securely as you would have liked.
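The second point above can be made concrete. Here is a hypothetical Dockerfile fragment in the risky style described, next to a safer variant; the URLs, file names, and checksum are placeholders for illustration, not taken from any real image:

```dockerfile
# Risky pattern: unverified download over plain HTTP, executed as root
# (the default user in most images).
RUN curl -o /tmp/install.sh http://example.com/install.sh && sh /tmp/install.sh

# Safer pattern: HTTPS, an explicit checksum pinned in the Dockerfile,
# and a dedicated unprivileged user for running the software.
RUN curl -fsSL -o /tmp/app.tar.gz https://example.com/app.tar.gz \
 && echo "<sha256-you-verified>  /tmp/app.tar.gz" | sha256sum -c - \
 && useradd --system app
USER app
```

Neither pattern makes the downloaded code trustworthy by itself, but the second at least pins exactly which bytes the build will accept.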

Of course you could audit all the Dockerfiles, but once you've done that, you'd almost have been better off just configuring the thing yourself!

As to whether this is "worth the risk", I'm afraid that's a decision only you can make. You are trading the time needed to develop and maintain your own images against the increased risk that someone involved in producing the software you download is either malicious or has made a mistake with regard to the security of the system.

Rory McCune
    _Of course you could audit all the dockerfiles, but then once you've done that you'd almost have been better just configuring the thing yourself !_ Isn't it even so that if you audit all Dockerfiles, you can't be sure the Docker image is just the result of what was defined in them? Couldn't more commands have been run on the image afterwards, without any mention in the Dockerfile? – Sebo Sep 19 '19 at 18:20
  • This answer is now 5 years old. Could you check if it is still up-to-date? – Martin Thoma Mar 24 '20 at 08:23

Trust it as much as any unsigned code that you run on your systems. Containers are just processes with some extra namespace protections, so that is all the protection they get. They still talk to the same kernel underneath.
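The "just processes" point is easy to see on any Linux host: every process, containerized or not, exposes its kernel namespaces under /proc, and a containerized process differs only in which namespace instances those links point to. A minimal sketch (Linux-specific):

```shell
# List the kernel namespaces of the current process. A process inside a
# container shows the same entries (mnt, net, pid, ...), just pointing
# at different namespace instances than the host's processes.
ls /proc/self/ns
```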

Marcin

It's best to consider a Docker container the same as running an application on the host system. There are some attempts to lock containers down by removing Linux kernel capabilities, but this is not really a guarantee. If you do run Docker, there are a few things you can do to help mitigate some of this risk.

  • SELinux - Enabling this will automatically generate an MCS label for each container, limiting its ability to do damage.
  • Read-Only - You can also mark the container read-only, which makes large portions of the container's filesystem immutable and makes it harder for an attacker to deploy malware.
  • Self-Hosted Registry - To reduce the risk of image tampering, loading malicious containers, or leaking secrets, you can host a registry internally. https://github.com/dogestry/dogestry is an example of one that sits on top of S3, though there are other options as well.
theterribletrivium

In essence, I argue it is the same question as whether open source software is trustworthy. But I think the risk of using community Docker containers is, at present, somewhat higher than the risk of using open source software.

First, as you mentioned, there is no signing and verification at present. Good open source packaging systems include this today, at least when obtaining software from official repositories, and even one-off projects tend to include checksums in download bundles. So in the open source world, you don't know the code is safe, but you usually know you're getting the code you're supposed to get. With Docker, you don't even know the container is unaltered between publication and your download of it.
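The checksum practice mentioned above is cheap to replicate for anything you download, whether or not the publisher signs releases. A minimal shell sketch, with a locally created file standing in for a real download:

```shell
# Create a stand-in for a downloaded artifact (the name is illustrative).
printf 'example artifact\n' > app.tar.gz

# The publisher would ship this digest file alongside the download...
sha256sum app.tar.gz > app.tar.gz.sha256

# ...and you verify the artifact against it before using it.
sha256sum -c app.tar.gz.sha256
```

This tells you the bytes match what was published, not that the contents are safe; that is exactly the distinction drawn above.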

Second is the issue of the package itself. Are you sure the software is not doing something nasty, like reporting your activities to some Internet destination? This used to be a common fear of open source software. Nowadays, many large enterprises do not question technical implementers who incorporate open source software. Arguably, closed source software could be worse in this way. But for a Docker container, especially one that includes a full set of operating system tools and libraries, the "attack surface" is so much bigger. If you think you might be using a bad build of postfix, just get the official code and build it (and some package managers do this normally). If you think you have a bad Docker container, it's often a bit of an adventure to reproduce the image "from source".

wberry

Can you trust SECURITY Docker containers? I think the answer to this must almost always be NO!

In my case, I'm wondering about 'linuxserver/openvpn-as'. Gee, wouldn't it be nice to just pop that thing into one of my Docker swarms, open it up to my private networks, and let it manage remote user access to those networks. But how can I entrust something like that to a container I got off the web with no provenance? Without provenance, I don't think I can.

If I had provenance, then I would just have to trust the creator to have

  1. started with something equally trustable (and so on and so on),
  2. not done something malicious, and
  3. not made an unsafe choice for an install or configuration step.

This is a pretty tall order in and of itself. In this case, I have to trust linuxserver.io. I'd never heard of them, but looking at them, it seems their entire job is to create containers, so they're probably really good at that. And this container has supposedly been downloaded from Docker Hub over 500K times. Sounds pretty safe.

So I could probably feel pretty good if I could be sure that the image I'm getting

  1. was created by linuxserver.io, and
  2. is in fact THE image that has truly been downloaded 500K times.

Well, first of all, (2) isn't true, right? That's counting all versions of the container, I believe. So maybe the container has been safe for years, but someone JUST released a version with a serious security hole in it. And then there's (1). That's the real stinker. How many other mechanisms do I have to trust (Docker Hub, Docker Hub's hosting provider, the internet infrastructure, ...) to be sure that the original source code that linuxserver.io considers to be the source for this container truly, fully defines the container I'm actually using? There's no way I can know that. And, really, I'd have to know that not just about this container, but about all of the sub-containers used to create it. So I can't possibly use this container.

This is an extreme case, but probably not so for any container involving network security. I expect that many of those 500K consumers that actually used this image did so recklessly, through no fault of linuxserver.io.

Docker needs a full container provenance mechanism. Even then, there's a huge amount of trust to muster here. Maybe too huge. Maybe security software simply isn't containerizable.

Jens Erat

You can build trust in the source with a quick investigation, but a more fundamental concern is the relative immaturity of Docker's overall security profile, as suggested by the need for root access to run your container.

Since you suggest we focus on popular solutions, let's consider that we are using a controlled Git-based repository like Docker Hub to pull down a popular vendor-supplied product. The Git mechanisms provide a good layer of trust: if you trust the named provider, then you can trust their Docker product. If you remember, a few years ago GitHub was compromised, but the source code was fine due to Git's integrity mechanisms and publishing controls. Those features protect Docker-published files as well, if you are using popular vendor products.

The Dockerfile that constructs your container can reach out and download tar files, etc., that are not hosted on trusted Git repositories. A simple check of that text file, the Dockerfile, can build trust in that space.
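That simple check can be partly automated by grepping the Dockerfile for remote downloads, particularly any made over plain HTTP. A rough sketch, with a made-up Dockerfile for illustration:

```shell
# A sample Dockerfile containing a download worth scrutinizing (made up).
printf 'FROM debian\nRUN curl -o /tmp/x http://example.com/x.sh && sh /tmp/x\n' > Dockerfile

# Flag any remote fetches at all, then specifically any unencrypted ones.
grep -nE 'curl|wget|ADD http' Dockerfile
grep -nE 'http://' Dockerfile && echo 'WARNING: unencrypted download'
```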

The overall security mechanisms are very young, so please consider their vulnerabilities in addition to issues of source control. From the Docker documentation on security:

There are three major areas to consider when reviewing Docker security:

  • the intrinsic security of the kernel and its support for namespaces and cgroups
  • the attack surface of the Docker daemon itself
  • loopholes in the container configuration profile, either by default, or when customized by users
  • the "hardening" security features of the kernel and how they interact with containers

I think the fact that their front page on security says there are three major areas and then lists four is yet another indication that things are in flux in this space. Docker appears to be a fantastic solution, but it may need some intelligent hardening with user-provided perimeters and policies in the near term.

zedman9991
  • I think you might be confusing integrity signing with cryptographic signing. Git does provide integrity checks to prevent corruption, but it does not provide cryptographic signing of a release to prevent tampering. – Rory McCune May 08 '15 at 14:08
  • I think the GitHub hack of 2012 demonstrates the value of the signing mechanism. You may well be correct that I have used confusing terms, I will review. Tampering on GitHub was not an issue. Correct? – zedman9991 May 08 '15 at 14:13
  • direct modification of the git repos wasn't, I think, an issue there, but that doesn't mean you can rely on software from GitHub; if someone steals the creds of a developer they can just push a new version to the repository. Crypto signing of a release helps to mitigate that risk and ensure you're getting the software you intend. – Rory McCune May 08 '15 at 14:15
  • if you see my edit above, the docker people themselves state that container signing isn't in place as yet, so I think that's also the meaning they're looking at... – Rory McCune May 08 '15 at 14:20
  • Rory, corrected. My apologies for misspeaking in reference to your and others posts. – zedman9991 May 08 '15 at 14:23
  • no worries, they're similar meaning/sounding terms so easy to mix up.. – Rory McCune May 08 '15 at 14:25
  • Rory - containers are not signed but Docker Hub prevents others from pretending to be vendors and pushing to their release branches. Git version control is central to monitoring that all is working as expected. If someone steals credentials signing is still trusted? – zedman9991 May 08 '15 at 14:28
  • well stealing credentials (username/password) doesn't necessarily provide the same level of compromise as stealing signing keys. Ideally signing keys should be held offline and used to sign releases; this increases the trust you can place in the software you receive. – Rory McCune May 08 '15 at 14:31

With Docker specifically, in my experience, you can trust the vast majority of open source material out there (like projects on GitHub) not to be deliberately malicious. You can read the Dockerfile and verify that it pulls code from official repos, if any (versus some random person's fork). If it pulls code from somewhere strange, you can always go look at what differs from the official repo in that specific fork. This is where you get into software that is harmful without being intentionally malicious: it's just crap code, horrible configuration, or the equivalent. In my experience with Docker, code that uses unofficial forks should be avoided unless the fork provides a specific feature you are looking for; the official repo is almost always updated more often than the fork. Also, Docker uses what are called "trusted builds", so you know you're getting what it says on the tin. Finally, Docker itself has had vulnerabilities. It sounds like you have the right mindset to develop a gut feeling when something seems wrong.

In fewer words: generally, if your Docker resources pull FROM an official build and from official repos, this is about as safe as you can get when using software. Docker itself has had its share of vulnerabilities, but as long as you stay on top of patching your infrastructure, you'll do alright.
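One concrete way to act on this is to pin the official base image by digest rather than by a mutable tag, so a build fails if the image you vetted is ever replaced. The digest below is a placeholder; the real one can be obtained with `docker images --digests` after pulling and inspecting the image:

```dockerfile
# Pinning by digest makes "FROM an official build" reproducible and
# tamper-evident: a tag like debian:stable can silently change, a digest cannot.
FROM debian@sha256:<digest-you-verified>
```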

L0j1k

Assume the default stance of not trusting anything you want to bring into your environment from the outside.

If it is something you really want to use, minimize the risk as much as possible by sequestering it, analyzing it, and making sure it will not do any harm.

Give it as little access to your environment as possible in order to let it do what you need it to do.

Check up on it. Update it, and make sure the updates don't introduce new risk.
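Applied to Docker specifically, giving a container "as little access as possible" might look like the following run flags; the image name is a placeholder, and each flag should be checked against your Docker version. This is a sketch, not a complete hardening guide:

```shell
# Least-privilege sketch: drop all capabilities, read-only root filesystem,
# no privilege escalation, no network, and resource limits.
docker run --rm \
  --cap-drop=ALL \
  --read-only \
  --security-opt no-new-privileges \
  --network none \
  --memory 256m --pids-limit 100 \
  example/app:latest
```

Most real workloads will need some of these loosened (e.g. a network), but starting from nothing and adding back only what the application demands is the stance this answer describes.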

willc
    "sequestering it" how?, "analyzing it" how?, "and making sure it will not do any harm" and how? "Give it as little access to your environment as possible" again, how!? Your answer is superficial, makes difficult tasks sound trivial and seems to ignore that Docker containers already are a confinement tool performing much of the above. – Steve Dodier-Lazaro May 08 '15 at 13:08
    I answered as succinctly as possible given the question being asked. Since no information was given about the environment, setup, policies, users, or anything else, no answer could reasonably be given without assuming a lot, and therefore increasing the likelihood that it would be incorrect. – willc May 08 '15 at 16:49