Suppose I'm building a microservice that resizes and optimizes image files. It's a long-running daemon that provides an HTTP API responding to standard GET/POST requests. Based on my knowledge of Docker, this is a perfect use case for containerization -- one process (the HTTP server daemon) running indefinitely in a dedicated container.
Where things get murky for me is when we look one level deeper at how this service might be built. Say that, for every incoming request, the service spawns an ImageMagick process to resize the image, then a pngquant (or similar) process to reduce the file size. Since these programs will be handling potentially untrustworthy user-provided image data, it's important to be able to update each component as soon as new versions are released. But how do I split these components up in terms of Docker images and containers?
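Concretely, the per-request flow I have in mind is something like this (Python sketch; the exact flags, sizes, and paths are made up):

```python
import subprocess

def build_pipeline(src, dest, width=1024):
    """Commands the service would spawn for one request.
    'convert' is ImageMagick's CLI; the pngquant flags below
    (--force --ext .png) rewrite the resized file in place."""
    resize = ["convert", src, "-resize", f"{width}x{width}>", dest]
    optimize = ["pngquant", "--force", "--ext", ".png", dest]
    return [resize, optimize]

def handle_request(src, dest):
    # One short-lived child process per step, reaped before responding.
    for cmd in build_pipeline(src, dest):
        subprocess.run(cmd, check=True, timeout=30)
```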
I've come up with a few different approaches, but it still seems like I'm missing something:
1. One big container. When building the HTTP API container, install/compile the ImageMagick/pngquant utilities at the same time. As far as the API daemon knows, it's just like running on any other computer. To update one of the binaries, rebuild the entire container (even if the API daemon itself hasn't changed). If it's necessary to test/develop against ImageMagick independently, it may be awkward because that's not the focus of this container's layout.
2. Containers running containers. The HTTP API is its own container, ImageMagick is its own container, pngquant is its own container, and so on. When the API handles a request that requires one of these utilities, the API code starts a short-lived container to convert the one image file, and that container is destroyed once the conversion is done. As I understand it, the HTTP API container would need access to the Docker daemon (e.g. the Docker socket) to be able to create new containers; that might not be a reasonable approach from a security standpoint.
3. Wrappers and glue. Wrap ImageMagick and pngquant in custom long-running daemon code so these containers never have to exit. Have the HTTP API container communicate with the others over the Docker network as required. Seems like a lot of pointless indirection and complexity for no real benefit.
4. Something about image composition that I'm missing. It would be interesting if there were a way to combine multiple images, each containing one of ImageMagick, pngquant, and the HTTP API, into a single container -- but it doesn't look like there's a clean way to "piecewise" cobble together a container from multiple, independently-replaceable images. From what I've seen, replacing/modifying a base image also invalidates all the images that were built on top of it, which makes this approach not all that different from #1.
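To make approach #2 concrete, the invocation I'm picturing is roughly the following (the image name `imagemagick:latest` and the `/work` mount point are made up). This is exactly where the permissions concern comes in, since the API process has to be able to talk to the Docker daemon:

```python
import subprocess

def convert_container_cmd(host_dir, filename, width=1024):
    """Hypothetical 'docker run' line for approach #2.
    Mount the working directory, run one conversion, and let
    --rm destroy the container when it exits."""
    return [
        "docker", "run", "--rm",       # auto-remove the container
        "-v", f"{host_dir}:/work",     # share the image file with the host
        "imagemagick:latest",          # assumed image name
        "convert", f"/work/{filename}",
        "-resize", f"{width}x{width}>",
        f"/work/resized-{filename}",
    ]

def handle_one_image(host_dir, filename):
    # Requires access to the Docker daemon (e.g. /var/run/docker.sock),
    # which is the security concern described in option 2.
    subprocess.run(convert_container_cmd(host_dir, filename),
                   check=True, timeout=60)
```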
What I'm really looking for, above all else, is to be able to develop/build/test/deploy components of a container's software stack independently without rebuilding or reinstalling the parts that didn't change. If this conflicts with the Docker design philosophy too strongly, I'd be willing to either change my view of the approach or look for different tools.
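If it helps clarify option 3, the "wrapper" I'm imagining is roughly this minimal daemon living inside the pngquant container (the port and request format are invented): it accepts a PNG body via POST, shells out to pngquant, and returns the optimized bytes, so the container never has to exit.

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class PngquantHandler(BaseHTTPRequestHandler):
    """Long-running wrapper so the pngquant container stays up.
    POST a PNG body, get the optimized PNG back."""

    def do_POST(self):
        raw = self.rfile.read(int(self.headers["Content-Length"]))
        # pngquant acts as a filter when given "-": PNG in on stdin,
        # optimized PNG out on stdout.
        result = subprocess.run(["pngquant", "-"], input=raw,
                                capture_output=True, timeout=30)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), PngquantHandler).serve_forever()
```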