307

I recently had a discussion with a Docker expert about the security of Docker vs. virtual machines. When I mentioned that I'd read from different sources that it's easier for code running within a Docker container to escape than for code running in a virtual machine, the expert explained that I was completely wrong, and that Docker containers are actually more secure than virtual machines or bare metal in terms of preventing malicious code from affecting other machines.

Although he tried to explain what makes Docker containers more secure, his explanation was too technical for me.

From what I understand, “OS-level virtualization reuses the kernel-space between virtual machines” as explained in a different answer on this site. In other words, code from a Docker container could exploit a kernel vulnerability, which wouldn't be possible to do from a virtual machine.

Therefore, what could make it inherently more secure to use Docker compared to VMs or bare metal isolation, in a context where code running in a container/machine would intentionally try to escape and infect/damage other containers/machines? Let's assume Docker is configured properly, which prevents three of the four categories of attacks described here.

esote
Arseni Mourzenko
  • 49
    I always thought of Docker as somewhere in the middle between full VMs and running code directly on the host OS - you exchange security for performance and flexibility. That's also why I kept my team away from it despite the hype, at least until the tech matures a bit more. – T. Sar Sep 18 '17 at 17:13
  • 12
    One possible minor security advantage for Docker: containers should ideally have very few executables/services present. It is possible for the containers to even lack a shell. This can greatly reduce the attack surface, especially for attack vectors that rely on tricking the app into running other executables. This is obviously also possible in a VM, but in practice those tend to use configuration management tools like chef/puppet, which in turn mean you usually have package management tools, common *nix utilities, etc installed too. – Kevin Cathcart Sep 18 '17 at 18:47
  • 42
    A self professed expert on any technology may be consciously or unconsciously biased in favor of that technology. Merely devoting a lot of time learning about something can cause one to see that something more favorably. – Todd Wilcox Sep 18 '17 at 23:31
  • 10
    Is this Docker expert someone who works at Docker, or just someone who's familiar with the technology? – user541686 Sep 19 '17 at 09:21
  • It is always a trade-off between security and flexibility. That's what you should always be aware of. – Anton Sep 20 '17 at 06:00
  • 4
    @T.Sar Even if Docker doesn't provide as much isolation as separate VMs, there are a number of situations where it adds enough security to be a worthwhile part of a defence-in-depth strategy, or where it doesn't add any security, but brings some other benefit. – James_pic Sep 20 '17 at 13:39
  • @James_pic The issue I have with Docker is that if some nasty worm finds a kernel vulnerability, you can kiss your whole system bye-bye. That doesn't happen as easily with VMs - while escaping from one is not impossible, usually the worm or virus just stays contained inside. Docker is good, it just doesn't fulfill my specific needs. – T. Sar Sep 21 '17 at 11:58
  • @T.Sar Agreed. But using Docker within VMs is reasonably common, and is usually more secure than having multiple uncontainerised processes sharing a VM (which is sometimes the alternative), or at least can be beneficial in ways that are orthogonal to security. – James_pic Sep 21 '17 at 16:08
  • 1
    @James_pic I wouldn't have anything against Docker inside a VM! That is a nifty idea that doesn't sound as bad as the VM-in-a-VM that a former workplace used. I'll honestly consider this! – T. Sar Sep 21 '17 at 17:59
  • 5
    @ToddWilcox “If you meet the Buddha, kill him.”– Linji – Paul Sep 21 '17 at 18:26
  • 1
    @Carrosive See [Did Einstein say “if you can't explain it simply you don't understand it well enough”?](https://skeptics.stackexchange.com/q/8742/30357) – Marc.2377 Sep 23 '17 at 04:41
  • I hope you weren't paying for this expert's expertise, because they are completely wrong. VMs are unquestionably more secure than containers. – Gaius Sep 24 '17 at 12:24
  • 2
    "What makes it more secure?" Your imagination. The collective imagination. Its absolutely, demonstrably *not* more secure. – zxq9 Sep 26 '17 at 21:47

9 Answers

459

No, Docker containers are not more secure than a VM.

Quoting Daniel Shapira:

In 2017 alone, 434 linux kernel exploits were found, and as you have seen in this post, kernel exploits can be devastating for containerized environments. This is because containers share the same kernel as the host, thus trusting the built-in protection mechanisms alone isn’t sufficient.

1. Kernel exploits from a container

If someone exploits a kernel bug inside a container, they exploited it on the host OS. If this exploit allows for code execution, it will be executed on the host OS, not inside the container.

If this exploit allows for arbitrary memory access, the attacker can change or read any data for any other container.

On a VM, the path is longer: the attacker would have to exploit the VM kernel, then the hypervisor, and then the host kernel (which may not be the same as the VM kernel).

2. Resource starvation

As all the containers share the same kernel and the same resources, if the access to some resource is not constrained, one container can use it all up and starve the host OS and the other containers.

On a VM, the resources are defined by the hypervisor, so no VM can starve the host OS of any resource, as the hypervisor itself can be configured to restrict resource use.
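As a hedged illustration (the image name is just an example), Docker can be told to enforce such limits per container; without these flags a container is unconstrained by default:

```shell
# Cap memory (and forbid extra swap), CPU time, and process count so a single
# container cannot starve the host or its neighbours. Docker's defaults are
# unlimited for all three.
docker run -d \
  --memory=512m --memory-swap=512m \
  --cpus=1.5 \
  --pids-limit=100 \
  nginx:alpine
```

`--pids-limit` in particular blunts fork bombs, which on an unconstrained container can otherwise take the whole host down.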

3. Container breakout

If any user inside a container is able to escape the container using some exploit or misconfiguration, they will have access to all containers running on the host. That happens because the user running the Docker engine is the same user running the containers. If any exploit executes code on the host, it will execute with the privileges of the Docker engine, so it can access any container.

4. Data separation

In a Docker container, some resources are not namespaced:

  • SELinux
  • cgroups
  • file systems under /sys and /proc/sys
  • /proc/sysrq-trigger, /proc/irq, /proc/bus
  • /dev/mem and /dev/sd* device files
  • kernel modules

If any attacker can exploit any of those elements, they will own the host OS.

A VM OS will not have direct access to any of those elements. It will talk to the hypervisor, and the hypervisor will make the appropriate system calls to the host OS. It will filter out invalid calls, adding a layer of security.

5. Raw Sockets

The default Docker UNIX socket (/var/run/docker.sock) can be mounted by any container if not properly secured. If a container mounts this socket, it can stop or start any container and create new images — effectively full control of the Docker daemon.
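A hedged sketch of the failure mode (image names are just examples): once the socket is mounted, anything inside the container can drive the host's Docker daemon over plain HTTP.

```shell
# Anti-pattern: granting a container the host's Docker control socket.
docker run -it -v /var/run/docker.sock:/var/run/docker.sock alpine sh

# From inside that container, the daemon's REST API is fully reachable.
# For example, listing every container running on the HOST:
apk add --no-cache curl
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```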


If it's properly configured and secured, you can achieve a high level of security with a Docker container, but it will still be less than a properly configured VM. No matter how many hardening tools are employed, a VM will always be more secure. Bare metal isolation is more secure still. Some bare metal implementations (IBM PR/SM, for example) can guarantee that the partitions are as separated as if they were on separate hardware. As far as I know, there's no way to escape PR/SM virtualization.

ThoriumBR
  • 1
    One correction though - modern hypervisors do not pass system calls to the host OS (with the exception of disk IO, as I recall); that was true a long time ago. Right now most hypervisors employ HW virtualization capabilities, which significantly improves performance while maintaining the same level of isolation. Simply put - the guest system directly uses HW on the host system, but the resource limits are set by the hypervisor. – Alexey Kamenskiy Sep 18 '17 at 05:59
  • 28
    in General I'd agree that VMs are more likely to be secure, due to smaller attack surface, however in regards to point 1. in reality VM escapes don't have to exploit the VM kernel, they tend to attack the device drivers which are provided by the hypervisor and the attack chain can be pretty simple. E.g would be venom which exploited the floppy disk driver and allowed for VM escape. – Rory McCune Sep 18 '17 at 10:02
  • 2
    You can set resource limits and quotas for docker containers. Yes it's true that by default containers have unrestricted quotas, while resource limits are forced to be specified ahead of time with VMs, but I don't think that having different default is necessarily a minus point here. – Lie Ryan Sep 18 '17 at 16:28
  • 10
    I think #5 is a strange point to be there. Nobody in their right mind would forward their `/var/run/docker.sock` to untrusted containers, just as nobody in their right mind would route their VM Hypervisor API endpoint to untrusted VMs or run untrusted kernels as Xen dom0. Being able to run management applications within VMs are features that can be done with VM as well. Software is not a fix for idiocy, is what I'm saying. – Lie Ryan Sep 18 '17 at 16:50
  • 53
    One of these days people will learn... there is no magic bullet for security. VMs, containers, microservices, monoliths, firewalls, air gaps, flux capacitors, doesn't matter. You need to continue to make smart, security conscious decisions no matter what you are working with. The greatest firewalls in the world can be undone with a single post-it note on a monitor, or a stupid help desk employee empathizing with a crying baby in the background. There is no sorcery that makes one technology more secure than another. So let's get back to being smart, security conscious people, eh? – corsiKa Sep 18 '17 at 18:54
  • 13
    @LieRyan: Lots of people in this industry aren't in their right mind – Petro Sep 18 '17 at 19:04
  • 5
    @corsiKa: They won't learn because they WANT to believe in it, and Security Companies will **ALWAYS** claim to offer it. – Petro Sep 18 '17 at 19:06
  • 3
    @Petro I suppose you're right - snake oil has always been a hot selling product, no matter the industry. You'd just think that professions full of, by necessity, incredibly smart people, would be able to see through it. – corsiKa Sep 18 '17 at 19:23
  • 1
    Does this differ on an non-linux OS, such as MacOS where technically Docker is running in a VirtualBox VM? Or am I mistaken on how this works? – Sandy Chapman Sep 18 '17 at 21:51
  • @L0j1k, do please enlighten us with an answer of your own. I would love to see it. (If you call other answers "shockingly dumb" in your answer, however, you should expect to be downvoted heavily. Just the facts, please.) – Wildcard Sep 19 '17 at 00:10
  • Re #2: Turns out, a fork bomb executed in a Docker container *can* and *will* force you to reboot the host OS. (I learned this the hard way.) – LegionMammal978 Sep 21 '17 at 10:53
  • @Petro Just to chime in that this often has a lot to do with objectives/repurposing/bean counting make-do philosophy. Fact is security often costs in edutime/performance and that equals money. The amount of small biz I used to go to in which I find some mgmt's Fonero or "MyCloud" devices shut down a whole firewall because IPD saw it sending data from his desktop to his phone or some other random IP and scared the onsite techs. Too much set and forget Docker is VM what MyCloud is to a cloud server. It's a lightweight solution to VM problems if you have existing training and failover/noncrit – user1901982 Sep 21 '17 at 15:07
  • 2
    This answer presents a view of VMs as being _intrinsically_ more secure than containerization, which seems wrong. While it's true in _general_, it is entirely possible that, on any given system, the software stack for containerization has fewer exploitable bugs than the entire hardware and software stack for virtualization. It just depends on what versions of everything you're running, possibly down to the exact factory and batch your hardware came from. – mtraceur Sep 21 '17 at 15:46
  • @LegionMammal978 You can set limits to prevent that. It's just not the default. – André Paramés Sep 21 '17 at 16:43
  • For a concrete example of (1) and (3), Qihoo 360 researchers [found a way to escape Docker using a kernel vuln and one escaping a QEMU VM using a QEMU vuln](https://conference.hitb.org/hitbsecconf2016ams/sessions/escape-from-the-docker-kvm-qemu-machine/). In that presentation they also outline the general steps and other ideas for exploitation. – Lekensteyn Sep 24 '17 at 21:07
  • 1
    @corsiKa We're talking theoretical points of security failure, ThoriumBR describes valid separations. Assuming exact same hardware/connectivity, firewall alone is a PITA for docker, running same services within a VM would cost you just 1 well-configured nftables instance. That distinction alone favors risk of VM over docker. It's like putting 10 children you need to look after in one house with only 2 open doors you can overlook all at once, versus putting the 10 children each in their own house with an often unknown number of open doors, some of which you had no idea were even there. – Julius May 10 '19 at 12:10
81

Saying either a VM or Docker is more secure than the other is a massive oversimplification.

A VM provides hardware virtualization: the hypervisor emulates hardware so that the guest kernel thinks it is running on its own machine. This type of virtualization makes it easier to isolate guests from one another. If your primary concern is isolation (you don't really need the virtual machines to interact with each other), then a VM is going to be significantly simpler to secure.

Docker provides operating-system-level virtualization: it uses kernel namespaces (and related kernel features) to virtualize the operating system so that each guest thinks it is running on its own instance of the OS. Operating-system virtualization provides significantly more flexibility in how you secure the interconnections between your containers. If your use of virtualization requires you to interconnect containers, then Docker lets you define these sharing rules in ways that are impossible or too cumbersome with virtual machines.

With great flexibility comes great risks; it's easier to misconfigure docker than VM, and the flexibility of docker resource sharing also creates more opportunities for both implementation and configuration bugs. However, if you manage to configure the sharing permissions properly and assuming there are no implementation bugs in Docker or the kernel, then Docker provides much more fine-grained sharing than hardware virtualization and may give you overall better security.

For example, socket sharing. With Docker, you can create a shared named socket between two containers easily by sharing the socket file, and you can define security permissions (using any existing security module: traditional Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.) on the socket endpoints, so that Docker/the kernel enforces which applications can access the socket and how. With a virtual machine, sharing a socket is more cumbersome: you have to set up a chain of sockets to pipe the data over TCP. Moreover, the hypervisor has very limited visibility and no way to control access to the socket endpoints, because permissions on those endpoints are applied by the guest kernels.
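A minimal sketch of this kind of controlled sharing, assuming hypothetical `server-image`/`client-image` images that talk over `/ipc/app.sock`; the kernel applies ordinary file permissions (plus any SELinux/AppArmor policy) to the socket file:

```shell
# Both containers mount the same volume; the server creates a UNIX socket in
# it and the client connects to it. The owner/mode of /ipc/app.sock decides
# who may actually use the endpoint - enforced by the host kernel.
docker volume create ipc
docker run -d --name server -v ipc:/ipc server-image
docker run -d --name client -v ipc:/ipc client-image
```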

Another example is folder sharing. With containers, you can share a folder by setting up a shared mount, and since Docker/the kernel enforces the file permissions used by containers, the guest system can't bypass those restrictions. With a VM, if you want to share a folder you have to let one machine run a network file server, Samba server, or FTP server, and the hypervisor has little visibility into the share and can't enforce sharing permissions. The additional moving parts (the file server) may also have their own vulnerabilities and misconfiguration issues to consider.
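For instance (image names are hypothetical), a shared folder needs no file server at all, and the reading side can be confined to a read-only view that the kernel enforces:

```shell
# One writable mount, one read-only mount of the same volume; no Samba/NFS/FTP
# daemon to misconfigure or exploit.
docker volume create shared-data
docker run -d --name writer -v shared-data:/data writer-image
docker run -d --name reader -v shared-data:/data:ro reader-image
```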

TL;DR: Use VM for isolation and containers for controlled sharing.

Peter Mortensen
Lie Ryan
  • 10
    Nice balanced answer. One thing I would add is that the one major advantage of standard containerization is that you only run one thing. You don't need to worry about things like whether ssh is running and whether it's configured right. – JimmyJames Sep 18 '17 at 17:32
  • 4
    Docker doesn't emulate or virtualize the kernel. Containers are implemented by the kernel itself. Docker is a tool for configuring and managing them. – André Paramés Sep 21 '17 at 16:48
  • @AndréParamés: Good catch, I've edited the answer to be more precise on how the actual containerization itself is done in the kernel. Do you think it's good enough? – Lie Ryan Sep 22 '17 at 03:28
  • 2
    It's well known that VMs provide better isolation by having a smaller attack surface. Claiming that VMs are generally more secure is not an oversimplification. – Federico Apr 05 '18 at 13:08
24

As you correctly stated, Docker uses "Operating-system-level virtualization". You can think of this (if you are a *nix fan) as a fancy form of chroot.

By harnessing features and functionality built into the OS, Docker acts as a director for containers. The software's view of the OS is dictated by Docker.

The same kernel is used across all containers. So for instance, if I was able to cause a kernel panic in one container (think "Blue Screen of Death"), all other containers are affected.

Configuration seems to be much more critical than with hardware-based solutions, because everything is effectively in the same shared space. Imagine putting a wild predator beside its natural food source. If you didn't build a strong enough enclosure around the predator, or forgot to shut the gate every time you left its enclosure, you can likely imagine what would happen.

While Docker is certainly a lightweight solution, I wouldn't run any unknown code alongside trusted code.

Malicious code would have to find a way to escalate its privileges to root/Administrator level in order to escape the container in any meaningful way.

In a virtual machine, the hypervisor would be attacked, not the kernel. This may prove more secure as there is a higher level of isolation between "containers" but introduces higher overhead for management.

From my understanding, there's nothing that makes Docker more secure than "bare metal" or hardware-based solutions. I would be inclined to say that Docker is less secure. In terms of one container per piece of software, it can be a different story.

If you want real-world examples, take a look at OpenVZ. It uses OS-level virtualization in a similar style to Docker, but with a modified kernel.

dark_st3alth
  • 1
    Also a docker container can access to the Host, just a bad container configuration can put your environment in a big trouble. – dlcardozo Sep 18 '17 at 00:56
  • 2
    I am a Linux fan and see `chroot` as something that provides cosmetic isolation but is so steeped in ancient Unix lore, with so many obscure gotchas, that it can not be trusted in any way at all. Is Docker really that bad? – trognanders Sep 18 '17 at 08:10
  • 2
    docker is based on lxc containers, a modern reinvention of the old unix chroot, so it's up to you. – LvB Sep 18 '17 at 08:12
  • 1
    @BaileyS no, `chroot` is just a basic comparison, Docker is actually much more sophisticated than a simple chroot (which hs no resource limits, for example) See: [LXC](https://en.wikipedia.org/wiki/LXC) for the actual technology used – Josh Sep 18 '17 at 13:23
  • 4
    @Bailey S `chroot` is a very simplified way of looking at Docker. The actual functions Docker uses includes cgroups, namespaces, and OverlayFS. Recent versions make use of `libcontainer` with `libvirt`, and LXC. – dark_st3alth Sep 18 '17 at 16:38
  • 1
    Another issue, at least with Docker, is usability because running containers without becoming `root` is not possible. After a while (and many tutorials even recommend it) it is quite seductive to add my user account to the `docker` group, which is equivalent to working as `root` because I can now, without having to enter a password, run an Ubuntu image and mount `/` into the container! – MauganRa Sep 21 '17 at 20:31
14

I agree with ThoriumBR's answer if we're just comparing a blank VM with a blank Docker container. It should be noted, however, that properly configuring your system such as in Red Hat's Atomic Host mitigates many of those factors and even eliminates some.

Also, since Docker started, you can count on one hand the number of vulnerabilities of the sorts mentioned in his answer, all of which could be mitigated by further layers such as SELinux. We are also starting to see hypervisor-based OCI-compatible runtimes that you can use instead of runc if you're really paranoid and willing to take a performance hit.

I will additionally point out that the vast majority of vulnerabilities in software are not in the kernel/driver space where VMs have the security advantage, but in the application layer, where Docker containers have the advantage, because they make it easier to create single-process attack surfaces. You can build a usable Docker container with a single statically-linked executable that runs as a non-root user, with limited resources and capabilities. You can't make a VM with an attack surface that small.
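As a hedged sketch of how small that surface can get (`myapp` is a placeholder for an image containing a single statically linked binary):

```shell
# Read-only root filesystem, no Linux capabilities, no privilege escalation,
# a non-root user, and hard resource caps: very little left to attack.
docker run -d \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --user 65534:65534 \
  --memory=256m --pids-limit=50 \
  myapp
```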

The bottom line is you have to look at the entire security picture. Docker containers can be very secure, but you have to look at the entire software supply chain, and make sure you configure docker and the host properly. VMs have their own set of strengths and weaknesses. You can't just compare the two "out of the box" and make a decision. You need a process in place to harden either solution.

Peter Mortensen
Karl Bielefeldt
13

Docker Containers are Not Inherently “More Secure” But the Ability to Quickly Spin Up—and Destroy—Duplicates in a Cluster Is Very Useful from a Security Standpoint.

Okay, lots of other answers here but people tend to forget that sometimes the best tool one can use to secure a web server/application is the ability to quickly redeploy clean code after already installed code has been compromised.

Nothing in the world is 100% safe or secure, especially in the world of exposed web applications. But Docker allows for better practices, if people actually understand their value and engage in those practices.

  • Do regular backups of assets and databases.
  • Always have a solid, portable configuration.
  • Manage code in a code repository.
  • Use a deployment process that allows for the redeployment of code in a few keystrokes.
  • And use a system config tool to ensure systems/servers can quickly be recreated with minimal effort.
  • If possible, deploy your code in some kind of load-balanced cluster so if one “thing” running code gets compromised, you can kill it off without completely bringing down the app.

Docker fits into the last point. So do VMs, but Docker even more so, because once you have made the decision to use Docker, you are inherently using a tool whose mindset/mentality is: “I will not last forever. I need to be recreated.”

And in the cases of an infected Docker container, you can take it offline—as far as the outside world is concerned—to do some forensics to see what happened and see what can be done to redeploy code safely to other existing and new codebase installs inside of Docker containers.

VMs might be able to be used in such a way, but in my practice and experience only developers really think that way about VMs. Most systems administrators—and the teams they are a part of—see VMs as a way to more easily squeeze out more use and utility out of a bare metal server rather than see them as quickly disposable machines that can be recreated at whim.

With Docker you really have to go out of your way to make a Docker container a non-disposable monolith. Docker is an application developer’s tool built for an era where virtualization is quick, cheap and easy. And when deployed in some sort of load-balanced cluster, you get the added stability of little to no downtime.

Giacomo1968
  • 4
    It's a sensible real-world answer. Anyone who has ever been compromised knows that there is an unnerving hour (or a day... or a week...) between the "aha, I know how to automatically detect we've been hacked" and the "aha, I devised a workaround that blocks this particular attack, so we can run the service while we analyze the root cause to find a vulnerability". – kubanczyk Sep 24 '17 at 21:17
  • I'd add that because Docker containers are lightweight, they're also more likely to be used in practice than VMs. And, of course, they're often used *on top of* VMs as an additional layer (if you're running Docker containers on AWS, for example). – al45tair Sep 25 '17 at 09:00
  • Huh? That's a really weird way to look at it. Yes, boss, all our data has been leaked/deleted/whatever, but hey, I can spin up a new Docker container with clean code. – Sandor Marton Sep 27 '17 at 08:40
  • 2
    @SandorMarton Security is not just protecting against data leaks. It’s also about preventing incursion, limiting damage and figuring out how to recover while keeping systems up and running. If data is lost it is usually due to bad application architecture and a failure to patch. In a case like that Docker, VMs or bare metal cannot save your ass. – Giacomo1968 Sep 27 '17 at 13:42
  • Yes, but you aren't preventing incursion by redeploying a new Docker container with clean code. I mean, that clean code was compromised somehow. My problem with your answer is that less experienced users will conclude that Docker is best for security, since you can quickly redeploy clean code. Redeploying clean code doesn't resolve anything. – Sandor Marton Sep 28 '17 at 20:34
  • 1
    @SandorMarton Modified my answer to address your concern. But at the end of the day nothing is 100% secure. Period. It’s all about limiting damage on all levels. – Giacomo1968 Sep 29 '17 at 14:42
8

The question is too broad to be answered by a simple "yes" or "no".

There are very clear and open attack surfaces for Docker containers:

  • If the attacker is the one who can start containers (i.e., has access to the Docker API), then he immediately, without further action, has full root access to the host. This has been well known for years, has been proven, and is not under debate by anyone (Google or SE will immediately give you simple command lines which do not even need a particular container to work).
  • If the attacker manages to get root inside the container, then you're in trouble. In effect, he can then do pretty arbitrary kernel calls and try to affect the host kernel. Unfortunately, many docker images seem to run their stuff as root and skip the USER in the Dockerfile - this is not a Docker problem but a user problem.
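For the first point, the "simple command line" really is a one-liner (nothing here is exotic; any stock image works):

```shell
# API access to Docker is root on the host: mount the host's root filesystem
# into a fresh container and chroot into it for a root shell on the HOST.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```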

I see one scenario which indeed would make a system based on Docker images safer (possibly...):

  • If you religiously make your images as small as possible, and only one per concern.
  • And all of them run unprivileged (i.e., without --privileged and not as root).
  • And networking is as tight as possible.

Then this would likely be more secure than a VM solution with the following parameters:

  • Installed base as small as possible.
  • Many concerns installed in the same VM.
  • Networking as tight as possible.

The reason being that if, e.g., an HTTP server is broken into, and it is running inside a minimal container (and I do mean "minimal" here - i.e., alpine with nothing except the bare minimum, including no outbound networking, no rw volumes, etc.), then there is less chance that the attacker would be able to do anything than if it were a VM with other services running in it.

Obviously, this scenario assumes that the VMs are actually fatter than the containers. If you make the VMs the same as the containers, then it's a moot point. But then, a well-designed Docker scenario would be designed like that, whereas a usual VM setup would, at least over time, migrate towards more and more stuff being installed.

AnoE
  • 4
    The second point is not always true anymore, thanks to user namespaces. You can map the root user inside the container to a regular user outside. – André Paramés Sep 21 '17 at 16:57
3

The main selling point of docker is not to be more secure, but to be easier. This includes:

  • Useful defaults, including some security options.
  • No work with configuring LXC, cgroups and so on yourself.
  • Ready-made images that can be downloaded with one line.
  • Reproducible VMs. No more "works on my machine" arguments.

Docker is as secure as the techniques it uses, which are mostly LXC (Linux namespaces), SELinux and AppArmor.

The common usage of Docker is often horribly insecure. People use one line to download an image made by somebody whose name they never even read before running his operating-system container. Even when you build the image yourself from your own base image (which can, for example, be built with debootstrap in the same manner as when you're building a chroot) and a Dockerfile, the Dockerfile often includes the curl $URL | bash anti-pattern to install software in the container.
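For reference, the anti-pattern and a hedged, slightly safer variant (URL and checksum are placeholders):

```shell
# BAD: executes whatever the server happens to return, unverified.
curl -fsSL https://example.com/install.sh | bash

# BETTER: fetch first, then run only if the file matches a checksum you
# obtained out of band.
curl -fsSLo install.sh https://example.com/install.sh
echo "<expected-sha256>  install.sh" | sha256sum -c - && bash install.sh
```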

Another thing is, that "the docker way" is not to upgrade images, but to rebuild them. This means stopping (people often assume you have a failover running with the new image), rebuilding, and starting again.

This comes from the way the snapshots are created: a routine apt-get dist-upgrade introduces layers that are semantic noise from the Docker point of view, where the history should look like "baseimage", "added apache", "added php", "installed roundcube", without "daily apt-get upgrade" steps in between.

If you're maintaining your own image repository in your LAN, Docker can be very useful, as you can re-deploy updated images quickly.

The snapshot feature is another thing which can be a good security improvement, when you can quickly roll back or fork an image to try something new in a test environment and then reset the container to a safe state.

The bottom line is that Docker is a very useful tool for developers who want to test deploying their code reproducibly and without changing their installed operating system, but there are better solutions for production. So you can run Docker in production securely, but it does not make things more secure than a good LXC setup without Docker.

As far as I know, Docker is not restricted to LXC as its backend (or will not be restricted for much longer), especially as it targets Windows as well. Choosing another backend has security implications analogous to, e.g., LXC vs. KVM vs. VirtualBox. The other points will probably stay the same.

allo
3

There are already some great answers, but to fully illustrate the difference between Docker and a VM I will add a picture:

[Figure: layer diagram contrasting the VM stack (apps → guest OS → hypervisor → host hardware) with the Docker stack (apps → Docker engine → shared host kernel)]

Source

From that perspective it is easier to understand why a VM is more secure than a Docker container.

Mirsad
1

First I would like to point out that the more you reuse/share the same resources, the harder it is to keep things safe.

That said, Docker tries to run each process in a sandbox (container), and this is the only reason why Docker may be considered more secure. Each of the multiple processes your server runs will theoretically have access only to its own process and will expose a shared folder or a socket to the other processes. Configuration files, credentials, secrets, and other debugging and maintenance ports/sockets will be inaccessible even from the same "machine".

Docker feels more secure because of the way it has been designed to work: it sandboxes each running process, whereas with virtual machines and bare metal you sandbox a group of processes and applications, and it is your responsibility to set up permissions between them accordingly.

albert