Apache has a concept of 'MaxClients' (renamed 'MaxRequestWorkers' in Apache 2.4).
That is the number of simultaneous connections it can handle. I.e. if an Apache server has a 'MaxClients' limit of 100, and each request takes 1 second to complete, it can handle a maximum of 100 requests per second.
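To put rough numbers on that (these are just the hypothetical figures from above, not a benchmark), here's the back-of-the-envelope calculation in Python:

```python
# Rough capacity ceiling for a fixed worker pool: throughput can't exceed
# workers / average-time-per-request. Numbers are the hypothetical example above.
max_clients = 100          # Apache's MaxClients (MaxRequestWorkers in 2.4+)
avg_request_seconds = 1.0  # assumed average time a worker is busy per request

ceiling = max_clients / avg_request_seconds
print(f"Maximum throughput: {ceiling:.0f} requests/second")
```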
An application like SlowLoris will flood a server with connections. In our example, if SlowLoris opens 200 connections per second and Apache can only serve 100 per second, the connection queue keeps getting bigger, eventually using up all the memory on the machine and bringing it to a halt. This is similar in effect to the way Anonymous' LOIC works.
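Here's a toy Python sketch of that mismatch, using the same made-up numbers; the per-connection memory figure is purely an assumption to show the trend, not a measurement:

```python
# Toy simulation: connections arrive faster than the worker pool can drain
# them, so the backlog grows without bound. The per-connection memory cost
# is an assumed placeholder, only there to show the trend.
incoming_per_second = 200        # connections opened by the attacker each second
served_per_second = 100          # Apache's ceiling from the example above
approx_bytes_per_queued = 8192   # assumed per-queued-connection overhead

backlog = 0
for second in range(1, 301):
    backlog += incoming_per_second - served_per_second
    if second % 60 == 0:
        mib = backlog * approx_bytes_per_queued / (1024 * 1024)
        print(f"after {second:>3}s: {backlog:>6} queued connections (~{mib:.1f} MiB)")
```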
NGINX and Lighttpd (among others) don't have a per-connection worker limit in the same sense; they use an event-driven worker model instead, so, in theory, there's no hard cap on the number of connections they can handle.
If you monitor your Apache connections (e.g. via mod_status), you'll see that most of the active workers are just 'Sending' or 'Receiving' data to/from the client. NGINX/Lighttpd don't dedicate a worker to those connections; they let them sit in the background without using up system resources, and only spend time on connections that actually have something going on (parsing requests, reading data from backend servers, etc.).
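As a rough illustration of that event-driven model (not how NGINX is actually implemented, just the same idea in miniature), a single Python asyncio event loop can hold lots of idle connections without pinning a worker to each one:

```python
# Minimal sketch of an event-driven server: one process, one event loop,
# many connections. An idle or slow connection costs almost nothing; work
# only happens when a socket actually has data, unlike a prefork worker
# that sits blocked on a single client.
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # While this coroutine waits for the client, no thread or process is tied up.
    await reader.read(4096)
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)  # assumed local port
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```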
I actually answered a similar question this afternoon, so the information in there might also be interesting to you: Reducing Apache request queuing