
I have created a web application that among other things allows users to write, compile and execute code (Java, C#). The application creates a Docker container for every user where compilation and code execution takes place. I have taken the following measures to secure the container:

  • This container has no persistent or shared data.
  • It does not have access to the docker API (which is secured with TLS).
  • There is no information within the container the user shouldn't know about.
  • The user will not be aware that the compiler is within a container.

Can I consider this container safe to run untrusted code in? Are there any known ways to affect the host machine from within the container in a configuration like this?

Hartger

3 Answers


tl;dr: container solutions do not, and never will, guarantee complete isolation; use virtualization instead if you require it.

Bottom up and top down approaches

Docker (and the same applies to similar container solutions) does not guarantee complete isolation and should not be confused with virtualization. Isolation of containers is achieved by adding barriers in between them, but they still use shared resources such as the kernel. Virtualization, on the other hand, shares far fewer resources, which are easier to understand, well-tested by now, and often enriched by hardware features that restrict access. Docker itself describes this in its Docker security article:

One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.

Consider virtualization as a top-down approach

With virtualization, you start with pretty much complete isolation and provide some well-guarded, well-described interfaces; this means you can be rather sure that breaking out of a virtual machine is hard. The kernel is not shared; if you have a kernel exploit allowing you to escape user restrictions, the hypervisor is still in between you and other virtual machines.

This does not imply perfect isolation. Again and again, hypervisor issues are found, but most of them are very complicated attacks with limited scope that are hard to perform (though there have also been very critical, easy-to-exploit ones).

Containers on the other hand are bottom-up

With containers, you start from running applications on the same kernel, but add barriers (kernel namespaces, cgroups, ...) to better isolate them. While this provides advantages such as lower overhead, it is much more difficult to "be sure" you haven't forgotten anything; the Linux kernel is a very large and complex piece of software. And the kernel itself is still shared: if there is an exploit in the kernel, chances are high you can escape to the host (and/or other containers).
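You can see this shared kernel directly from inside a container (a trivial check; the alpine image is just an example):

    # Both commands print the same kernel release: the container
    # brings no kernel of its own, it runs on the host's.
    uname -r
    docker run --rm alpine uname -r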

Users inside and outside containers

Especially before Docker 1.9 (which should bring user namespaces), this pretty much means "container root also has host root privileges" as soon as another missing barrier in the Docker machine (or a kernel exploit) is found. There have been such issues before, you should expect more to come, and Docker recommends that you

take care of running your processes inside the containers as non-privileged users (i.e., non-root).

If you're interested in more details, estep posted a good article on http://integratedcode.us explaining user namespaces.
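As a rough sketch of what this looks like once your Docker version supports it (user namespace remapping shipped as a daemon flag around Docker 1.10; the "default" value and the dockremap user are the stock mapping, and details may differ by version):

    # Start the daemon with user namespace remapping enabled;
    # "default" creates/uses the dockremap user and /etc/subuid ranges.
    docker daemon --userns-remap=default

    # Inside a container, id still reports UID 0, but that UID is
    # mapped to an unprivileged UID on the host.
    docker run --rm alpine id -u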

Restricting root access (for example, by enforcing a non-privileged user when creating the image, or at least by using the new user namespaces) is a necessary, basic security measure for providing isolation, and may give satisfying isolation between containers. With restricted users and user namespaces, escaping to the host gets much harder, but you still should not be sure there isn't just another not-yet-considered way to break out of a container (possibly by exploiting an unpatched security issue in the kernel), so containers should not be used to run untrusted code.
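For illustration, enforcing a non-privileged user at image-build time is a small change (a minimal sketch; the base image and the sandbox user name are placeholders for whatever your compiler setup uses):

    FROM java:8
    # Create an unprivileged account and drop to it; everything
    # started in the container now runs as this user by default.
    RUN useradd --create-home --shell /bin/false sandbox
    USER sandbox
    WORKDIR /home/sandbox

You can also override the user at run time with docker run --user, but baking it into the image avoids forgetting the flag.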

Jens Erat
  • Thank you. Unfortunately, virtualization is not an option due to the setup time; I need these compilers to be set up within a couple of seconds. I will set up a restricted user within the container. The host machine doesn't contain any valuable data (passwords etc.), so a compromise wouldn't be catastrophic. I will consider using Docker an acceptable risk. – Hartger Dec 11 '15 at 11:32
  • @Hartger Are you also accepting the risk that, when the machine is compromised, it will be used for lots of illegal activities while you are still legally responsible for it? If so, good luck. – lorenzog Dec 11 '15 at 12:35
  • Sorry, but you should have a citation for "container root also has host root privileges"; it's not trivial, or indeed always possible, to escape a containerized setup. – Rory McCune Dec 13 '15 at 11:36
  • Reading the paragraph again, I agree that I did not quite make the point I wanted to: I didn't make it clear enough from the surrounding tone of the post that this statement as-is is not valid all of the time. While it is a bold statement (and still is in the reworked revision), I still claim running stuff as root inside containers is no viable approach if you want a long-running, stable environment without fear of the next available kernel/Docker exploit. Do you agree with the reworded statement? – Jens Erat Dec 13 '15 at 12:32
  • There are a number of large, established companies offering container-based VPS services. Their whole business model is based on running untrusted customer code in containers (though none that I know of use docker specifically). Are you suggesting that their entire business model is based on an inherently insecure and impossible-to-secure system? – Josh May 03 '16 at 18:47
  • You state that containers are inherently more open to the possibility of exploitation than virtualization, which absolutely makes sense to me. However, this raises the obvious follow-up question: are containers in general (or Docker specifically) inherently any more or less secure than simply running untrusted code as an unprivileged user? – Josh May 03 '16 at 18:52
  • Docker (and containerization in general) definitely adds additional restrictions and isolation, which are always a good thing to have security-wise (no matter whether the code you're running is trusted or not). On the other hand, Docker also adds complexity to the setup in general, which might introduce new problems. But all in all, running untrusted code in a container _without root privileges_ is safer than running it "bare metal" in an unprivileged account. Whether running it under a _privileged_ container user account is safer or not depends on available exploits. – Jens Erat May 07 '16 at 07:39
  • Regarding companies running containers: lots of them actually run virtual machines that each run a single container, treating containers only as a distribution channel for code, not as a means of virtualization. If they let you run containers directly, this is to be considered _potentially_ more dangerous than "real" virtualization, though it might be an acceptable risk. Whether it is depends on individual use cases, but also consider that virtualization does not guarantee 100% isolation either. – Jens Erat May 07 '16 at 07:44

Whilst the answer from @jens-erat makes the correct high-level point that virtualization provides superior isolation to containerization solutions like Docker, it is not a black-and-white picture.

On the one hand, there have been a number of guest-to-host breakouts in virtualization technology (for example, the "Venom" vulnerability in the virtual floppy disk controller), so, like any security control, the isolation provided by virtualization is not 100%.

From the perspective of hardening your Docker installation to improve isolation and reduce risk, there are a number of steps you can take:

  1. Docker has some good security guidance available on hardening. There's a (slightly out of date) CIS Security Guide, and also Docker Bench, which can be used to review configurations.

  2. Depending on how your application operates (i.e., how the code gets there for compilation), you can modify the operation of Docker to reduce the chances of malicious activity. For example, assuming the code arrives at the host level, you may be able to deny network access to the container (the --net none switch on docker run). You can also look at dropping additional capabilities to reduce what the process running in the container can do (see the sketch after this list).

  3. Consider using AppArmor profiles to restrict resources. AppArmor can be used to restrict what can be done in the container, and you can use tools like bane to generate profiles for your applications (again, see the sketch after this list).

  4. Also, I would recommend implementing some monitoring at the host level to look for possibly malicious access. Since you know what the containers should and should not be doing, some relatively strict monitoring would alert you to any possible break-out.
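Putting points 2 and 3 together, a hardened invocation could look roughly like this (a sketch only; compiler-image and compiler_profile are placeholder names, and which options are viable depends on what your compilers actually need):

    # No network, no capabilities, read-only root filesystem,
    # and a custom AppArmor profile (e.g. one generated with bane).
    docker run --rm --net none --cap-drop ALL --read-only \
      --security-opt apparmor:compiler_profile \
      compiler-image javac Main.java

Note that a compiler needs somewhere to write its output, so --read-only usually has to be combined with a dedicated writable volume.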

Another area that could be productive for hardening this kind of setup is to use stripped-down host OSes and container images. The less code exposed, the smaller the attack surface. Something like CoreOS or Ubuntu Snappy Core might be worth looking at.
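The same idea applies to the images themselves; a minimal base keeps the attack surface (and the toolset available to an attacker) small. A hypothetical example:

    # alpine-style bases ship only a few megabytes of userland,
    # compared to hundreds for a full distribution image.
    FROM alpine
    RUN adduser -D -s /sbin/nologin sandbox
    USER sandbox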

Rory McCune

My solution would be something like SmartOS, because SmartOS supports Docker, KVM and zones. You could use these in combination to prevent malicious code from executing beyond a Docker container. After all, Docker containers are still just files on a filesystem.

schroeder