
I am trying to integrate Docker containers into my simple CI pipeline for deploying a webapp.

I have three containers: nginx, Tomcat, and MySQL.

I understand the basics of creating these containers with Dockerfiles and linking them together.

My artifacts are uploaded to a Nexus server, and I see different ways of deploying an artifact to the Docker containers.

One would be to rebuild the Tomcat container, pulling the newly generated artifact and copying it into the image with `ADD` in the Dockerfile (see the sketch below).
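A minimal sketch of that approach, assuming the CI job has already fetched the WAR from Nexus into the build context (the base image tag and file names are placeholders):

```dockerfile
# Sketch: bake the artifact into the image at build time.
# Assumes the CI job downloaded webapp.war from Nexus into the build context.
FROM tomcat:8.0

# Copy the freshly built artifact into Tomcat's deployment directory
ADD webapp.war /usr/local/tomcat/webapps/webapp.war
```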

Another would be to start the unchanged Tomcat container, mounting a host volume that contains the new artifact fetched from Nexus (see the sketch below).
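The volume approach would look roughly like this (the host path and image tag are assumptions):

```sh
# Sketch: run the stock Tomcat image and mount the artifact from the host.
# Assumes the CI job placed the WAR under /opt/artifacts on the host.
docker run -d \
  -v /opt/artifacts/webapp.war:/usr/local/tomcat/webapps/webapp.war \
  tomcat:8.0
```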

I do not understand which of these is the correct approach from a 'Docker point of view'. I see a tradeoff between a reusable container that needs some configuration at start-up, and a fixed container that I can just ship and run without the extra step of providing the webapp folder for deployment.

I have the same general confusion about the nginx container: should specific changes to the config files lead to a rebuild of the container, or should I just mount the files from the host machine when I start the container (as in the sketch below)?
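For reference, the mount variant for nginx would look something like this (the host path is a placeholder):

```sh
# Sketch: start the stock nginx image with a config file mounted from the host.
docker run -d \
  -v /opt/config/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx
```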

Thanks a lot

spike07

2 Answers


Part of the win with Docker is to streamline and automate parts of the development process, and I think the best way to accomplish this is to rebuild the container. Manage your simple configs and small files in a repository, and copy the artifacts into the Docker image as part of the Dockerfile build. This way, whenever you have to make code updates, you just roll a new Docker container.

Sharing a mounted volume will also work, but I would avoid that if you can. The reason is that mounted volumes are harder to use with SELinux. SELinux can now automatically generate MCS labels for each Docker container on launch, which essentially constrains each container to its own directories. This greatly reduces the risk to your host, since by default containers don't contain in any meaningful way.
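A rough sketch of that flow as a CI step, assuming a Dockerfile like the one in the question (the Nexus URL, image name, and `BUILD_NUMBER` variable are all placeholders):

```sh
# Sketch: fetch the new artifact, bake it into a fresh image, tag it with the build number.
curl -fo webapp.war "http://nexus.example.com/content/repositories/releases/com/example/webapp/1.0/webapp-1.0.war"
docker build -t myorg/webapp-tomcat:${BUILD_NUMBER} .
docker run -d myorg/webapp-tomcat:${BUILD_NUMBER}
```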

  • I'd like to reiterate the "working" aspect from http://jonathan.bergknoff.com/journal/building-good-docker-images. My vote is to `ADD` the artifact to the image that you want to run. The container should be able to run anywhere without an outside dependency on code or artifacts. Additionally, you might rebuild this image and tag it with the build number or release of the artifact. – Andy Shinn Nov 06 '14 at 00:47

It does not matter much whether you use a fixed container that gets the latest code through a volume, or build a new Docker image with the latest code embedded.

The difference is that in the first case you build once and run many times, while in the second case you have to build, and then run, for each release.

That said, if you are deploying to production with Docker, I would have the CI pipeline produce the same Docker images that will be used in production, along the lines of the sketch below.
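For example, the tail end of the pipeline could publish a versioned image that production then pulls and runs unchanged (the registry host, image name, and `VERSION` variable are assumptions):

```sh
# Sketch: CI builds and publishes the image...
docker build -t registry.example.com/webapp:${VERSION} .
docker push registry.example.com/webapp:${VERSION}

# ...and production runs exactly the image that CI built and tested:
docker pull registry.example.com/webapp:${VERSION}
docker run -d registry.example.com/webapp:${VERSION}
```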

Thomasleveil