
I am currently building an API that sends transactional emails to users. I do this with a job queue, bull in particular. During development, something crossed my mind: where are these jobs actually executed? Currently, I send these jobs through bull to a Redis database running in a Docker container on my computer. My first thought was that the jobs run in the Redis container, but I suppose that's not true, because that container isn't running Node.js, which is what my API uses.

I suppose these jobs are actually executed on the same machine that created them. But if that's the case, what's so good about having a job queue? I thought the point of a job queue was to delegate the task to something else so that the API isn't slowed down by sending all of these emails. To my understanding, all that's happening now is that the task is delayed by sending the job to Redis and receiving it back.

I am quite new to job queues. I hope I've described my situation clearly enough.

Thank you.

1 Answer


Job queues are for recording work items.

Working the queue may optionally be done by threads or processes separate from the rest of the application, which is useful for asynchronous background processing and for taking advantage of multiple CPUs.

Durability is also useful. If queued emails still need to go out after the application restarts, persistent storage becomes necessary, and that is not trivial to make robust.

A queue may still be useful even if it is processed from the application server; it depends on the features required.


bull uses a remote database (Redis) for storage, so feel free to write a worker service (using bull) and run it in a separate container.

  • The main application container adds a work item and moves on.
  • The queue picks it up, with Redis providing storage.
  • A worker container processes the job and sends the email. Increase concurrency to have it process several jobs in parallel.
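A minimal sketch of that split, assuming a Redis instance reachable at `REDIS_URL`. The queue name, the job payload shape, and the `sendEmail` mailer are placeholders, not anything from the question; in a real deployment the producer and the processor would live in separate files and run in separate containers:

```javascript
const Queue = require('bull');

// Both containers point at the same Redis instance and queue name.
const emailQueue = new Queue('send-email', process.env.REDIS_URL || 'redis://127.0.0.1:6379');

// Producer (runs in the API container): record the work item and return
// immediately. No SMTP work happens on the request path.
async function onUserRegistered(user) {
  await emailQueue.add({ to: user.email, template: 'welcome' });
}

// Worker (a separate Node.js process in its own container): registers a
// processor; bull pulls jobs from Redis as they arrive. The first argument
// is the concurrency, i.e. up to 5 jobs are processed in parallel.
emailQueue.process(5, async (job) => {
  const { to, template } = job.data;
  await sendEmail(to, template); // hypothetical mailer function
});
```

Because the jobs are stored in Redis, the worker can crash or restart and pick up where it left off, and you can scale by running more worker containers against the same queue.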
John Mahowald