How to make a symlinked folder appear as a normal folder

42

20

I have two Dart applications that I need to dockerize. These two apps use a shared source directory.
Because Docker does not allow adding files from outside the context directory (projects/app1), I can't add files from ../shared, nor from shared (the symlink inside projects/app1).

I'm looking for a way to trick Docker into doing it anyway.

My simplified project structure

- projects
  - app1
    - Dockerfile
    - shared (symlink ../shared)
    - otherSource
  - app2
    - Dockerfile
    - shared (symlink ../shared)
    - otherSource
  - shared
    - source
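
To illustrate with the structure above, these are the kinds of ADD lines that fail for me in projects/app1/Dockerfile (the /app target path is just an example):

# rejected: ../shared is outside the build context
ADD ../shared /app/shared
# doesn't work either: the symlink is sent into the context as a link, so the shared files never reach the image
ADD shared /app/shared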

I could move the Dockerfile one level up and run docker build from there, but then I would need two Dockerfiles (for app1 and app2) in the same directory.
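
Concretely, that workaround would look like this (the app1 tag is just an example); docker build reads the file named Dockerfile from the root of the build context, which is why both apps would have to keep their Dockerfile there:

$ cd projects
$ docker build -t app1 .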

My current idea is that if I could somehow hide the fact that projects/app1/shared is a symlink, this problem would be solved. I checked whether I could share projects using Samba, remount it somewhere else, and configure Samba to treat symlinks like normal folders, but I haven't found out whether this is supported (I don't have much experience with Samba and haven't tried it yet, I only searched a bit).
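
For reference, from what I found so far the Samba side would look roughly like this; the share name and path are only examples and I haven't tested any of it:

[global]
   ; so that clients see server-side symlinks as plain directories
   unix extensions = no

[projects]
   path = /path/to/projects
   follow symlinks = yes
   ; only needed if a link points outside the share
   wide links = yes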

Is there any other tool or trick that would allow that?

I would rather not change the directory structure, because that would cause other problems, and I would also rather not copy files around.

zoechi

Posted 2014-11-20T09:54:47.503

Reputation: 532

Answers

36

I don't have much experience with Docker, so I can't promise this will work, but one option would be to mount the directory instead of linking to it:

$ cd projects/app1
$ mkdir shared
$ sudo mount -o bind ../shared shared/

That will attach ../shared to ./shared and should be completely transparent to the system. As explained in man mount:

The bind mounts.

Since Linux 2.4.0 it is possible to remount part of the file hierarchy somewhere else. The call is:

mount --bind olddir newdir

or by using this fstab entry:

/olddir /newdir none bind

After this call the same contents are accessible in two places.
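
For this layout a one-off mount as shown above is enough, and if you want it to survive reboots, an fstab entry along these lines should work (the absolute paths are examples, adjust them to wherever your projects tree lives):

/home/you/projects/shared  /home/you/projects/app1/shared  none  bind  0 0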

terdon

Posted 2014-11-20T09:54:47.503

Reputation: 45 216

@zoechi this is perfectly on topic on both sites. As a general rule, I would post more technical questions like this on U&L and more user-space questions here. The choice is completely up to you though. On the one hand, there are more users here so more eyeballs, on the other, there is a much higher concentration of professional *nix people on U&L. Just make sure you don't post the same question on both sites. If you want to move it, either delete this or flag for mod attention and ask them to migrate. – terdon – 2014-11-20T13:05:07.320

It was mandatory for me to restart the docker daemon! Otherwise the mounted dir was not visible in the container. – dim – 2016-01-22T11:25:25.407

@dim yes! I tried to get it to work with Capistrano and it didn't work – turns out I mounted the shared directories after I started the container – csch – 2016-11-07T14:18:15.613

Unfortunately this won't work for Windows or OS X users. The debate about this issue has been... lively. – Jason – 2017-02-24T01:34:58.973

Is mounting 'committed' to source control, e.g. GitHub? Or would I have to do it every time? – pie6k – 2019-03-14T11:22:23.857

@pie6k how could it be committed? Source control tracks changes in text files, not commands run on the system. – terdon – 2019-03-14T11:25:49.213

25

This issue has come up repeatedly in the Docker community. It basically violates the requirement that a Dockerfile be repeatable whether you run it or I run it. So I wouldn't expect this ability to be added, as discussed in this ticket: Dockerfile ADD command does not follow symlinks on host #1676.

So you have to conceive of a different approach. If you look at this issue: ADD to support symlinks in the argument #6094, a friend of ours from U&L (@Patrick, a.k.a. phemmer) provides a clever workaround:

$ tar -czh . | docker build -

This tells tar to dereference the symbolic links while archiving the current directory, and then pipes the result to the docker build - command as the build context.

excerpt from tar man page
-c, --create
       create a new archive

-h, --dereference
       follow symlinks; archive and dump the files they point to

-z, --gzip, --gunzip, --ungzip
       filter the archive through gzip
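
As a concrete usage sketch (the app1 image tag is just an example), run it from the app directory so that the shared symlink is dereferenced into the build context:

$ cd projects/app1
$ tar -czh . | docker build -t app1 -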

slm

Posted 2014-11-20T09:54:47.503

Reputation: 7 449

This is an EXCELLENT solution! I understand why Docker claims they want to omit this feature. However, there is a considerable difference between the workflow I use while developing my containerized project and how I expect it to be built for production. On my local machine I want a super tight feedback loop. My app has 1 git repo and the build environment for the containers has a 2nd repo. I need to be able to make edits and build tests locally before I can decide if I want to commit and push. I won't have symlinks or ADD instructions in my final project. – Bruno Bronosky – 2015-03-16T18:44:29.613

Dockerfiles are not repeatable. Dockerfiles cannot possibly be made repeatable, because they almost all have apt-get or something equivalent at the 2nd or 3rd layer, and apt-get is not repeatable. Tying the Docker development strategy to a misguided attempt to make the impossible true will just saddle Docker with a series of bad abstractions that help nobody. https://nathanleclaire.com/blog/2014/09/29/the-dockerfile-is-not-the-source-of-truth-for-your-image/ – Jason – 2017-02-24T01:12:08.387

For the newbs, can you please explain in English what is going on here? Linking to the tar man page is nice but – Alexander Mills – 2017-07-14T21:02:49.630

Ok, so I don't really see why this is any better than a cp command, can you explain why it's better? I also think the pipe is confusing/overly convoluted. Why not just put the tar command above the build command? I guess because then you would overwrite the symlinked dir with the real dir. – Alexander Mills – 2017-07-14T21:13:01.590

@AlexanderMills - you don't want to copy the links in, you need the actual files they're linking to, hence the way I showed. Think about this bit: where are the links going to point to inside a docker container that doesn't have the actual files the links are pointing to? – slm – 2017-07-15T00:14:45.117

no I get that part - to repeat myself (a) I don't see why this is better than a cp command, and part (b) I already think I answered myself - you need the pipe otherwise you will overwrite the symlink dir with the actual dir data. In any case, I think a better solution than this is to use either mount (to mount the parent dir to a local dir) or to copy a temp Dockerfile to the parent dir, and then delete the temp Dockerfile when you're done. – Alexander Mills – 2017-07-15T00:30:52.373

@AlexanderMills - the best way to see what happens is to try it and see the difference. Also, the above is building a container, not running one, so there's no mounting. https://stackoverflow.com/questions/37328370/is-it-possible-to-mount-a-volume-on-the-host-during-build-phase-of-a-docker-imag. I highly suggest you try all these things out, it'll make much more sense. – slm – 2017-07-15T02:31:39.047

I just added /bin/cp ../requirements.txt . && docker build ... to a Makefile for building the Docker image, it was easier – user5359531 – 2018-01-10T20:31:49.650