I'm trying to wrap my head around Docker to architect a simple swarm that will eventually be deployed to AWS EC2 Container Service (ECS).
My task is to take different kinds of jobs from an SQS queue and process them based on a JSON payload like `{ "type": "<TYPE_NAME>" }`.
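For concreteness, here is a minimal sketch of parsing such a message body in Python. The `payload` field and its contents are hypothetical; only the `"type"` key comes from my actual design:

```python
import json

# Hypothetical SQS message body -- the exact schema beyond "type" is up to us.
body = '{"type": "resize_image", "payload": {"key": "uploads/cat.jpg"}}'

job = json.loads(body)
job_type = job["type"]  # used to decide which worker handles the job
```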
My initial thoughts on this are as follows:
Each type of job will get its own container
This is useful because some of my jobs are Python scripts, others require C++-compiled programs, still others require specialized environments, and the rest are just file operations.
One container to rule them all
One container will control the rest, reading from the SQS queue, determining what kind of job it is, and handing it off to the appropriate worker container.
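The master's read-and-route logic might look like the sketch below. The handler registry and job types are hypothetical placeholders; the boto3 polling loop is shown as a comment since it needs a real queue and credentials:

```python
import json

# Hypothetical registry mapping job types to handlers. In the real system
# each handler would forward the job to its dedicated container rather
# than doing the work in-process.
HANDLERS = {
    "resize_image": lambda payload: f"resized {payload['key']}",
    "run_cpp_job": lambda payload: f"ran binary on {payload['key']}",
}

def dispatch(message_body: str) -> str:
    """Parse an SQS message body and route it by its "type" field."""
    job = json.loads(message_body)
    handler = HANDLERS[job["type"]]
    return handler(job.get("payload", {}))

# The actual polling loop would use boto3 (untested sketch, QUEUE_URL assumed):
#
# import boto3
# sqs = boto3.client("sqs")
# while True:
#     resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
#     for msg in resp.get("Messages", []):
#         dispatch(msg["Body"])
#         sqs.delete_message(QueueUrl=QUEUE_URL,
#                            ReceiptHandle=msg["ReceiptHandle"])
```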
Question time
I've successfully gotten all these individual containers basically built. Now I'm trying to figure out how to get them to talk to each other.
How should I think about passing a job off from the master container to the children? Should it be via API call? Do I need a listener service on each container attached to a port waiting for a signal or can I just execute code directly from the master instance with a shared file system?
It's all very new to me.