The OOTB configuration is typically 150 – 200 request-processing threads (`maxThreads`) per connector. This default is intended for medium load / complexity applications on "average" hardware.
As a general rule of thumb, a lightweight, high-performance application should look at using a maximum of around 150 threads per CPU core (so a total of 600 on a 4-core box). A more conservative setting, for heavyweight applications, would be around 300 threads in total. I'd expect most requirements to fall somewhere in the middle, but this is highly situational and will need some analysis - see @zagrimsan's answer.
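For reference, these counts correspond to the `maxThreads` attribute on the Connector in `server.xml`. A minimal sketch for the "lightweight app on a 4-core box" case (the values here are purely illustrative, not a recommendation):

```xml
<!-- conf/server.xml (illustrative only): ~150 worker threads per core
     on a 4-core box for a lightweight, high-performance application -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="600"
           minSpareThreads="25"
           connectionTimeout="20000"
           redirectPort="8443" />
```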
Obviously HTTPS has a slightly higher overhead, so the standard practice is to reduce the number of accept threads accordingly.
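As a sketch, an HTTPS connector with a correspondingly smaller pool might look like this (Tomcat 8.5+/9 syntax; the keystore details and the thread count are placeholders):

```xml
<!-- HTTPS connector with a smaller pool to allow for TLS overhead;
     keystore path/password and maxThreads are placeholders -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           maxThreads="400">
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/keystore.jks"
                     certificateKeystorePassword="changeit"
                     type="RSA" />
    </SSLHostConfig>
</Connector>
```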
Using the APR / native connector can improve throughput, but the limiting factor is usually the application profile, so again, no magic numbers.
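If the Tomcat Native (libtcnative/APR) library is installed, switching a connector over is just a matter of the `protocol` attribute - shown here against the same illustrative pool size:

```xml
<!-- Same sizing, but using the APR/native protocol implementation;
     requires the Tomcat Native library to be installed on the box -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol"
           maxThreads="600"
           connectionTimeout="20000" />
```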
The danger of using a thread setting that is too high is that the server can become "terminally busy" – so much time is spent managing threads on top of the application's own demands that everything else suffers (GC pressure being one notable symptom). It seems counter-intuitive, but generally less is more.
A busy server with the thread count correctly configured will degrade gracefully under heavy load. Too high and it’ll fall over!
Now, there are a number of related connector settings (acceptCount, minSpareThreads, connection timeouts and the like) that will also need adjusting to suit, but that's beyond the scope of this answer.