It would depend on what other features you need, so for simplicity I'm assuming nothing is needed beyond serving PHP scripts and static files. I doubt the app you describe is going to be CPU-bound even on an Atom, so I would guess that RAM is going to be your key bottleneck.
If those requests really need to be concurrent, rather than some queuing being acceptable, then you are going to need enough RAM to have at least 100 PHP instances and the relevant web server workers running at once. You might be able to reduce this by using Apache with a thread-based MPM, provided your PHP build is completely thread-safe (the libcurl module is thread-safe; you'll need to check any others you use, though I believe most are these days), as this allows more efficient sharing of code pages between Apache, mod_php and PHP. If thread-safe operation is not possible, you might want to consider something lighter-weight than Apache so you can at least reduce that part of the footprint: lighttpd or nginx running PHP via FastCGI tends to be more faff to set up than Apache+mod_php (most distros will give you Apache+mod_php out of the box with only a little tweaking), but it can be significantly more RAM-efficient than Apache.
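If you do go the threaded route, the worker MPM directives are where you'd cap things. A rough, illustrative sketch only (Apache 2.2-style directive names; the numbers are placeholders you'd size against how much RAM each PHP-loaded thread actually uses on your box):

```apache
# Illustrative worker MPM sizing - tune the numbers to your measured RAM use.
<IfModule mpm_worker_module>
    StartServers          2
    ServerLimit           4
    ThreadsPerChild      25
    MaxClients          100   # enough threads for ~100 concurrent requests
    MinSpareThreads      25
    MaxSpareThreads      75
    MaxRequestsPerChild   0
</IfModule>
```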
If using Apache, you can reduce the chance of workers with PHP loaded being "wasted" serving static files (which would otherwise force you to allow for more workers than your PHP requests actually need) by putting nginx (or lighttpd) in front of Apache as a reverse proxy: the low-RAM, event-driven server handles all static requests and passes only the requests that need PHP through to Apache.
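A rough sketch of what that nginx front-end might look like, assuming Apache is listening on 127.0.0.1:8080 and your static files live under /var/www/static (both assumptions - adjust to your own layout):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder hostname

    # Static files served directly by nginx's event-driven workers.
    location /static/ {
        root /var/www;
        expires 1h;
    }

    # Everything else (the PHP requests) is passed through to Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```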
How complex is your application, and how locked are you to PHP? The situation you describe (many workers, most of which are just sat waiting for something to happen) lends itself to a completely event-driven solution rather than a process/thread-based one, but that would mean moving away from PHP. There are many event-based web architectures around at the moment, some of which are apparently pretty stable; to name but one, node.js is one of the popular flavours of the day right now. With a fully event-driven arrangement, each concurrent request uses very little extra RAM. You could also mix and match technologies: a RAM-efficient event-driven web server handling static files and proxying for both node.js instances (running the code that spends all its time waiting for requests to the outside world to return) and Apache instances (running your remaining PHP code), though this may not be attractive depending on your level of technical knowledge and confidence, as it is more complex to set up than just Apache+mod_php.
Edit
With your new description of what the PHP script is doing, you should be able to use PHP's curl extension's curl_multi_* functions to make your HTTP requests concurrently. This means all your checks can be done by one PHP process, so the memory-use problem is moot and switching web servers will make little or no difference. See the PHP manual for a reference to these functions, and tutorials like this one if the reference material doesn't make the overall process clear.
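A minimal sketch of the curl_multi_* pattern - the URL list, timeout and what you do with each response are placeholders for your own checks:

```php
<?php
// Placeholder list of endpoints to check - replace with your own.
$urls = array(
    'http://example.com/status',
    'http://example.org/status',
    'http://example.net/status',
);

$multi   = curl_multi_init();
$handles = array();

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the body rather than echoing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // arbitrary per-request timeout
    curl_multi_add_handle($multi, $ch);
    $handles[$url] = $ch;
}

// Drive all the transfers from this single process until none are still running.
$running = null;
do {
    curl_multi_exec($multi, $running);
    curl_multi_select($multi, 1.0); // wait for activity instead of busy-looping
} while ($running > 0);

// Collect the results.
foreach ($handles as $url => $ch) {
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $body   = curl_multi_getcontent($ch);
    // ... record $url, $status and $body in your database here ...
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);
```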
If this is an automated process and you don't need an HTTP response from it (i.e. you are just recording the responses in a database for further analysis), you could even run the PHP script directly from a cron job, rather than needing a web server at all.
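For example (hypothetical schedule, script path and log location):

```
# Run the check script every 5 minutes via the PHP CLI, logging its output.
*/5 * * * * /usr/bin/php /path/to/check-script.php >> /var/log/check-script.log 2>&1
```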