tl;dr: container solutions do not and cannot guarantee complete isolation; use virtualization instead if you require that.
Bottom-up and top-down approaches
Docker (and the same applies to similar container solutions) does not guarantee complete isolation and should not be confused with virtualization. Isolation between containers is achieved by adding barriers between them, but they still share resources such as the kernel. Virtualization, on the other hand, shares far fewer resources, and those are easier to understand and well-tested by now, often backed by hardware features that restrict access. Docker itself describes this in its Docker security article:
One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.
Consider virtualization as a top-down approach
With virtualization, you start with pretty much complete isolation and expose a few well-guarded, well-described interfaces; this means you can be rather sure that breaking out of a virtual machine is hard. The kernel is not shared: even if a kernel exploit lets you escape user restrictions, the hypervisor still stands between you and the other virtual machines.
This does not imply perfect isolation. Hypervisor issues are found again and again, but most of them are very complicated attacks with limited scope that are hard to perform (although there have also been very critical, "easy to exploit" ones).
Containers, on the other hand, are bottom-up
With containers, you start from applications running on the same kernel and add barriers (kernel namespaces, cgroups, ...) to isolate them better. While this has advantages such as lower overhead, it is much more difficult to "be sure" nothing has been forgotten; the Linux kernel is a very large and complex piece of software. And the kernel itself is still shared: if there is an exploit in the kernel, chances are high that you can escape to the host (and/or other containers).
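One way to see the shared kernel for yourself, assuming a Docker host is available (the alpine image is just an example):

```shell
# The kernel release reported inside a container matches the host's,
# because containers share the host kernel (a VM runs its own instead).
uname -r
docker run --rm alpine uname -r

# Barriers such as PID namespaces can even be selectively removed:
# with --pid=host the container sees all of the host's processes.
docker run --rm --pid=host alpine ps -e | head
```

Both `uname -r` invocations print the same kernel release, which is exactly the shared resource discussed above.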
Users inside and outside containers
Especially pre-Docker 1.9, which is expected to bring user namespaces, this pretty much means "container root also has host root privileges" as soon as another missing barrier in the Docker machine (or a kernel exploit) is found. There have been such issues before, you should expect more to come, and Docker recommends that you
take care of running your processes inside the containers as non-privileged users (i.e., non-root).
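Following that recommendation boils down to creating and switching to an unprivileged user in the image. A minimal Dockerfile sketch (the base image and the user name `app` are just examples):

```dockerfile
FROM debian:stable-slim

# Create an unprivileged user and drop root privileges for everything
# that follows; the container's processes then start as "app", not root.
RUN useradd --create-home --shell /usr/sbin/nologin app
USER app

CMD ["/bin/sh"]
```

With `USER app` in place, a breakout through a missing barrier would at least start from an unprivileged user instead of (host-mapped) root.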
If you're interested in more details, estep posted a good article on http://integratedcode.us explaining user namespaces.
Restricting root access (for example, by enforcing a non-privileged user when creating the image, or at least by using the new user namespaces) is a necessary, basic security measure for providing isolation, and may give satisfying isolation between containers. With restricted users and user namespaces, escaping to the host becomes much harder, but you still shouldn't assume there isn't yet another, so far unconsidered way to break out of a container (possibly by exploiting an unpatched security issue in the kernel), and containers shouldn't be used to run untrusted code.
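The daemon-wide user-namespace support mentioned above can be enabled roughly as follows; a sketch assuming a systemd-based host (the `userns-remap` key and its `default` value are from the Docker documentation, the mount path is just an example):

```shell
# Enable user-namespace remapping (Docker 1.10+): UID 0 inside containers
# maps to an unprivileged subordinate UID range on the host, taken from
# /etc/subuid and /etc/subgid.
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Verify: a file created by container "root" shows up owned by a high,
# unprivileged UID on the host rather than by real root.
docker run --rm -v /tmp/userns-test:/data alpine touch /data/file
ls -ln /tmp/userns-test/file
```

Even with remapping enabled, the kernel attack surface remains shared, so this hardens but does not remove the caveat above.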