We're looking for a server architecture that can convert 1,000 large images (PDF to PNG) in 5 seconds. As a test, we benchmarked a 16-core server, using GNU Parallel to run 1,000 ImageMagick conversions:
parallel --eta convert {} {.}.png ::: *.pdf
Each image takes around 1.0 seconds to convert, and with all 16 cores running at 100% (monitored via htop), we rendered all 1,000 images in about 60 seconds. That matches the back-of-envelope math (1,000 images × 1 s ÷ 16 cores ≈ 62 s), so a single box already scales close to linearly.
We'd like to, someday (as budget allows), get this down to 5 seconds. At roughly one second per image, that means about 200 conversions running concurrently, so we obviously need more servers working in a distributed environment; we just don't know where to start. The closest thing we have to a lead is GNU Parallel's own SSH mode, sketched below.
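For what it's worth, GNU Parallel can already fan jobs out to remote hosts over SSH. A minimal sketch, assuming hypothetical hosts node1 through node4 with ImageMagick installed and passwordless SSH set up (the host names are placeholders, not real machines we have):

# -S lists the remote workers; --trc {.}.png is shorthand for
# --transfer --return {.}.png --cleanup: copy each PDF to a worker,
# pull the resulting PNG back, and delete the remote temp files.
parallel -S node1,node2,node3,node4 --eta --trc {.}.png convert {} {.}.png ::: *.pdf

We suspect the per-file SSH transfer overhead would eat into a 5-second budget, which is part of why we're asking about architecture rather than just pointing Parallel at more boxes.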
What sort of server architecture, applications, tools, services, technologies, etc. would you suggest we look into?