
Suppose I'm building a microservice that resizes and optimizes image files. It's a long-running daemon providing an HTTP API that responds to standard GET/POST requests. Based on my knowledge of Docker, this is a perfect use case for containerization -- one process (the HTTP server daemon) running indefinitely in a dedicated container.

Where things get murky for me is one level deeper, in how this service might be built. Let's say that, for every request that comes in, the service spawns an ImageMagick process to resize the image, and then a pngquant or similar process to reduce the file size. Since these programs will be handling potentially untrustworthy user-provided image data, it's important to be able to update each component to the latest available version as soon as new versions are released. But how do I split these components up in terms of Docker images and containers?
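
For concreteness, the per-request flow might look something like this sketch (Python; it assumes the `convert` and `pngquant` binaries are on the container's PATH, and the dimensions and file names are placeholders):

```python
import os
import subprocess
import tempfile

def process_image(raw_bytes: bytes, width: int, height: int) -> bytes:
    """Resize with ImageMagick, then compress with pngquant."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "in.png")
        resized = os.path.join(tmp, "resized.png")
        final = os.path.join(tmp, "out.png")
        with open(src, "wb") as f:
            f.write(raw_bytes)

        # One short-lived ImageMagick process per request
        subprocess.run(
            ["convert", src, "-resize", f"{width}x{height}", resized],
            check=True, timeout=30,
        )
        # One short-lived pngquant process to shrink the file
        subprocess.run(
            ["pngquant", "--force", "--output", final, resized],
            check=True, timeout=30,
        )
        with open(final, "rb") as f:
            return f.read()
```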

I've come up with a few different approaches, but it still seems like I'm missing something:

1. One big container. When building the HTTP API container, install/compile the ImageMagick/pngquant utilities at the same time. As far as the API daemon knows, it's just like running on any other computer. To update one of the binaries, rebuild the entire container (even if the API daemon itself hasn't changed). If it's necessary to test/develop against ImageMagick independently, it may be awkward because that's not the focus of this container's layout.

2. Containers running containers. The HTTP API is its own container, ImageMagick is its own container, pngquant is its own container, and so on. When the API handles a request that requires one of these utilities, the API code starts a container to convert that one image file, and the container is destroyed once the conversion is done. As I understand it, the HTTP API code would need some pretty lofty permissions to be able to create a new container, so this might not be a reasonable approach from a security standpoint (see the first sketch after this list).

3. Wrappers and glue. Wrap ImageMagick and pngquant in custom long-running daemon code so these containers never have to exit, and have the HTTP API container communicate with the others over the Docker network as required (see the second sketch after this list). Seems like a lot of pointless indirection and complexity for no real benefit.

4. Something about image composition that I'm missing. It doesn't look like there's a clean way to "piecewise" cobble together a container from multiple, independently-replaceable images. It would be interesting if there were a way to combine multiple images, each containing one of ImageMagick, pngquant, and the HTTP API, into a single container. Based on what I've seen, replacing or modifying an image also invalidates all the images built on top of it, making this approach not much different from #1.
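
To make the tradeoff in option 2 concrete, here is a rough sketch of per-request container spawning with the Docker SDK for Python. Note that the calling process needs access to the Docker socket (the "lofty permissions" mentioned above); the image name and mount paths are hypothetical:

```python
import docker

# Requires /var/run/docker.sock mounted into the API container,
# which effectively grants root-equivalent access to the host.
client = docker.from_env()

def resize_in_container(host_dir: str, name: str, width: int, height: int) -> None:
    """Run one ImageMagick conversion in a throwaway container."""
    client.containers.run(
        "imagemagick:latest",  # hypothetical image name
        ["convert", f"/work/{name}", "-resize", f"{width}x{height}",
         f"/work/out-{name}"],
        volumes={host_dir: {"bind": "/work", "mode": "rw"}},
        remove=True,  # destroy the container when the command exits
    )
```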
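
And option 3, the thin wrapper daemon, might be as small as this sketch (standard-library Python; the port, endpoint, and fixed output size are arbitrary choices, and resizing is delegated to the same `convert` binary):

```python
import os
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

class ResizeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers["Content-Length"]))
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "in.png")
            dst = os.path.join(tmp, "out.png")
            with open(src, "wb") as f:
                f.write(raw)
            subprocess.run(["convert", src, "-resize", "800x600", dst],
                           check=True, timeout=30)
            with open(dst, "rb") as f:
                body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The serve loop is what keeps this container running indefinitely.
    HTTPServer(("0.0.0.0", 8080), ResizeHandler).serve_forever()
```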

What I'm really looking for, above all else, is to be able to develop/build/test/deploy components of a container's software stack independently without rebuilding or reinstalling the parts that didn't change. If this conflicts with the Docker design philosophy too strongly, I'd be willing to either change my view of the approach or look for different tools.

smitelli

1 Answer


Design this with multiple "independent" containers, some of which depend on others through a remote API.

There's no need to have "containers run containers". Instead, have one container that accepts ImageMagick processing requests and waits for work continuously. If you need to upgrade that container separately, do so. This continuously-running process is essentially your "wrappers and glue" option.

Note that you could structure this so it simply runs ImageMagick as a batch process; that might be all you need. However, putting ImageMagick in its own container also lets you run several instances side by side, each with a different ImageMagick version, for simultaneous comparison and testing.
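
As a sketch of that comparison idea: if two service containers were attached to the same Docker network under hypothetical names like `imagemagick-v6` and `imagemagick-v7`, the API container could fan the same image out to both and diff the results (standard-library Python; the hostnames, port, and endpoint are assumptions):

```python
import urllib.request

# Hypothetical service endpoints, one per ImageMagick version under test
VERSIONS = [
    "http://imagemagick-v6:8080/resize",
    "http://imagemagick-v7:8080/resize",
]

def compare_versions(raw_bytes: bytes) -> dict:
    """POST the same image to each versioned container and collect the outputs."""
    results = {}
    for url in VERSIONS:
        req = urllib.request.Request(
            url, data=raw_bytes,
            headers={"Content-Type": "image/png"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            results[url] = resp.read()
    return results
```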

Concerning assembling an image from pieces of other images: no, there isn't a direct way to do that. However, there are examples on Docker Hub of images whose base layer combines two frameworks (like Tomcat and the JDK). To fold another framework into such an image, you typically take the relevant excerpt from that framework's (likely public) Dockerfile, which shows how to install and configure it, and add those steps to your own.

David M. Karr