34

I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container.

I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else.

Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?

dflemstr
  • Why would it be wrong to bind mount these somewhere else? – Michael Hampton Aug 18 '14 at 16:17
  • 2
    Because the storage location is an implementation detail. The day docker adds another storage driver, the location will move. I need to make this semi-automatic and it would be nice to use public APIs for that reason. – dflemstr Aug 18 '14 at 17:22
  • 2
    It might be worth considering working over nsenter (or docker-enter) to achieve your goals; there is of course the constraint of having to run the inspection code/tools inside the container. – VladFr Sep 07 '14 at 08:53
  • Is there no way of instructing Linux to mount across a container border? – dflemstr Sep 07 '14 at 21:45
  • @dflemstr yes, there is, --volumes-from kinda does that, it appears to mount a union of the directory from the other container's base image and the volume, but this behaviour is not documented afaik – Tarnay Kálmán Sep 29 '14 at 03:21
  • Check out the answer by mnieber, it is the way to go if the different pieces are all acceptable. – Oliver Sep 30 '21 at 09:05

6 Answers

17

Take a look at docker export.

To quickly list the files in your container:

docker export CONTAINER | tar -t

To export:

docker export CONTAINER > snapshot.tar
docker export CONTAINER | tar -x PATH-IN-CONTAINER

Or to look at a file:

docker export CONTAINER | tar -x --to-stdout PATH-IN-CONTAINER
# e.g.
docker export consul | tar -x --to-stdout etc/profile

Docker 1.8 supports cp:

https://docs.docker.com/reference/commandline/cp/

Usage:  docker cp [options] CONTAINER:PATH LOCALPATH|-
        docker cp [options] LOCALPATH|- CONTAINER:PATH
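
For example, copying a single file out of a container might look like this (reusing the consul container name from the example above as a placeholder):

docker cp consul:/etc/profile ./profile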

Update: run these commands on the machine where the Docker daemon runs (SSH into your Docker machine first if it is remote).

laktak
  • 2
    My images are fairly big (many hundred MiB) so doing this to fetch individual files is too much overhead. It will create the multi-hundred-megabyte file every time. – dflemstr Aug 03 '15 at 14:46
  • @dflemstr use the line with `tar x PATH-IN-CONTAINER`, it will only extract the files you need. – laktak Aug 03 '15 at 14:58
  • ...but the entire `tar` archive is still created in the Docker daemon, and it takes multiple minutes to create... – dflemstr Aug 03 '15 at 14:58
  • @dflemstr not sure what your setup is but e.g. `docker export ubuntu|tar -t|grep etc/network` takes 3 seconds for me. – laktak Aug 03 '15 at 15:03
  • You are probably running that on the same machine as the Docker daemon so you don't need to do a network transfer, and the `ubuntu` image is really small... – dflemstr Aug 03 '15 at 15:04
  • Yes, you should run this command locally. Jenkins (888MB) takes 7.7sec but YMMV. – laktak Aug 03 '15 at 15:10
3

You can use docker commit to persist the current state of your container in a new image, and start an interactive container from this image to inspect the contents.

From the documentation:

It can be useful to commit a container’s file changes or settings into a new image. This allows you to debug a container by running an interactive shell, or to export a working dataset to another server.
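
As a rough sketch (the container name mycontainer and the image tag inspect-snapshot are placeholders):

# snapshot the container's current state into a throwaway image
docker commit mycontainer inspect-snapshot
# open an interactive shell in a disposable container based on that image
docker run --rm -it inspect-snapshot /bin/sh
# remove the throwaway image when done
docker rmi inspect-snapshot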

Hope this helps.

Eric Citaire
2

Podman can run and work with Docker images. You could use it to mount a running or stopped container:

prompt:~ # mnt=`podman mount 26e8b85f7a5c`
prompt:~ # ls $mnt
bin  boot  dev  etc  home  lib  ...  tmp  usr  var

where 26e8b85f7a5c is the ID of the container to be mounted.
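
When you are done inspecting, unmount it again (same container ID):

prompt:~ # podman umount 26e8b85f7a5c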

aventurin
2

You can use nsenter to run your inspection program (which probably has to be included in the container already) inside the container's namespaces. But to mount the container's filesystem exactly as it is seen from inside, you would have to mount the original image plus all of its layers in the aufs case, or perform the equivalent operation for device mapper, btrfs and any other (future) storage engine, and that is different in each case. It is probably more efficient to let Docker do that work for you, exactly as it is supposed to, and use nsenter to do the inspection inside the container.
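
A minimal sketch of the nsenter approach (CONTAINER is a placeholder; note that ls is resolved inside the container's mount namespace, so it has to exist in the image):

# get the PID of the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' CONTAINER)
# enter its mount namespace and list the container's root filesystem
sudo nsenter --target "$PID" --mount ls /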

There are other approaches. docker diff will show which files changed in that container, if you want to see what changed rather than what was in the original image.

And for data that must be persistent and inspectable, a better pattern is probably to keep it in a volume, either mounted from the real filesystem, held in a pure data container, or kept in the same container, so that you can launch another container with the inspection program and mount those volumes from it.
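
For example, a throwaway inspection container that sees the volumes of another container (datacontainer and /data are placeholder names; busybox is just an example image):

docker run --rm -it --volumes-from datacontainer busybox ls /data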

gmuslera
2

EDIT: I tried the solution below and unfortunately it did not work well for me in practice. The mounted filesystem did not accurately reflect the container's filesystem (even with cache=no). I'm not sure if this is a fundamental problem or me doing something wrong.

You can install sshd in the Docker image and use docker exec to run an SSH daemon (/usr/sbin/sshd -D) in the Docker container (note that SSH port 22 of the container needs to be exposed).

Then, use docker cp to copy your public SSH key into /root/.ssh/authorized_keys inside the Docker container.
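
A rough sketch of those two steps (mycontainer and the key path are placeholders; the image is assumed to already contain sshd and /root/.ssh):

# start sshd in the background inside the running container
docker exec -d mycontainer /usr/sbin/sshd -D
# install your public key for the root user
docker cp ~/.ssh/id_rsa.pub mycontainer:/root/.ssh/authorized_keys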

Finally, use docker inspect to find the container's IP address and mount the container's filesystem using

sudo sshfs -o allow_other,default_permissions,IdentityFile=/path/to/identityfile root@xxx.xx.x.x:/ /mnt/my_container
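
The container's IP address (the xxx.xx.x.x placeholder above) can be looked up with something like this (mycontainer is a placeholder; the format string assumes the default bridge network):

docker inspect -f '{{.NetworkSettings.IPAddress}}' mycontainer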

You'd have to write a script to make this work comfortably in practice.

mnieber
  • This looks very promising, should work but I will have to try it. If it does work, this should be marked the answer, based on the OP. – Oliver Sep 30 '21 at 09:02
  • @Oliver I also found this tool that allows you to ssh into any running container without having to install sshd in it: https://github.com/jeroenpeeters/docker-ssh. It's quite old by now and I don't know if it still works (I guess it does); when I used it, it worked great. – mnieber Oct 01 '21 at 11:27
0

This is an ancient question, but if someone is still looking for a way to mount (and inspect) a docker filesystem from an arbitrary host, there are 2 projects allowing exactly that:

dguerri