
We're looking for a server architecture that can convert 1,000 large images in 5 seconds. As a test, we ran a benchmark on a 16-core server, using GNU Parallel to run the 1,000 image conversions:

ls -1 *.pdf | parallel --eta convert {} {.}.png

Each image takes around 1.0 second to convert, and with all 16 cores running at 100% (monitored via htop), we were able to render all 1,000 images in about 60 seconds.
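Back of the envelope, that matches: 1,000 images × 1.0 s ÷ 16 cores ≈ 62.5 s of wall-clock time. Hitting 5 seconds would need roughly 1,000 ÷ 5 = 200 cores running flat out, i.e. on the order of 13 machines like this one, assuming the workload stays CPU-bound and scales linearly.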

We'd like, someday (as budget allows), to get this down to 5 seconds. We obviously need more servers working in a distributed environment; we just don't know where to start.

What sort of server architecture, applications, tools, services, technologies, etc. would you suggest we look into?

user1661677
    Consider a queuing system with many workers pulling jobs off and performing them as quickly as possible. In AWS you'd use SQS and probably spot instances, depending on the performance and reliability tradeoff. – Tim Feb 26 '17 at 04:06
  • Can you try this: ls | parallel --block 1k -j1 -Sserver{1..100} --pipe --roundrobin parallel convert {} {.}.png – Ole Tange Feb 26 '17 at 17:53
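For reference, a minimal sketch of the distributed GNU Parallel approach suggested in the second comment, assuming worker machines reachable over passwordless SSH (the hostnames server1..server3 are placeholders) with ImageMagick installed; --trc transfers each input file to the remote host, returns the named output, and cleans up the remote copies:

# hostnames are placeholders; requires SSH key access and ImageMagick on each worker
ls -1 *.pdf | parallel --eta -S server1,server2,server3 --trc {.}.png convert {} {.}.png

GNU Parallel should then spread the jobs across the listed hosts according to the number of cores it detects on each.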

0 Answers