So I'm working on a website uptime monitor that should check thousands of websites per minute by making a simple HTTP request to each one and checking the returned status code.
I've tested it using multiple processes of a Node.js implementation to rule out a CPU or single-process bottleneck.
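For context, each check boils down to something like this minimal sketch (the HEAD method, the 5-second timeout, and the example URLs here are just illustrative, not my exact code):

```js
const https = require('https');

function checkSite(url) {
  return new Promise((resolve) => {
    const req = https.request(url, { method: 'HEAD', timeout: 5000 }, (res) => {
      res.resume(); // discard any body; only the status code matters
      resolve({ url, up: res.statusCode >= 200 && res.statusCode < 400 });
    });
    req.on('timeout', () => req.destroy(new Error('timeout'))); // triggers 'error' below
    req.on('error', () => resolve({ url, up: false }));
    req.end();
  });
}

// Fire off a batch of checks concurrently
Promise.all(['https://example.com', 'https://example.org'].map(checkSite))
  .then((results) => console.log(results));
```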
Anyway, I've hit a wall: the maximum number of sites I can check is roughly 2,000 per minute, and that number doesn't change whether I run 1 instance or 10 instances of the code.
When I run multiple processes, the throughput of each one drops so that the total stays at 2,000 sites checked per minute.
The actual network bandwidth doesn't seem to be the problem: 2,000 checks at a few hundred bytes of response headers each is roughly 1 MB of data per minute (and that's being generous). So the bottleneck is probably somewhere else, and I'm trying to pinpoint it.
Tested on DigitalOcean.
Thanks in advance.