
I am using the NIO/APR connector for Tomcat 7.

E.g.

<Connector port="8080" 
        protocol="org.apache.coyote.http11.Http11AprProtocol" 
        connectionTimeout="3000"
        redirectPort="8443" 
        URIEncoding="UTF-8" 
        maxPostSize="0"
        maxThreads="200"
        enableLookups="false"
        disableUploadTimeout="false"
        maxKeepAliveRequests="-1"
        useBodyEncodingForURI="true"
        compression="on"
        compressableMimeType="text/html,text/xml,text/javascript,text/css,text/plain"   
        />
  1. How can I determine the optimal maxThreads value for my NIO/APR connector in Tomcat?

  2. What is a good starting value for maxThreads?

confile

2 Answers


The out-of-the-box (OOTB) configuration is typically 150–200 total accept threads for each connector. This default is intended for medium-load, medium-complexity applications on "average" hardware.

As a general rule of thumb, a lightweight, high-performance application should look at using a maximum of 150 (accept) threads per CPU core (so a total of 600 on a 4-core box). A more conservative setting, for more heavyweight applications, would be 300 accept threads. I'd expect most requirements to fall somewhere in the middle, but this is highly situational and will need some analysis - see @zagrimsan's answer.
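To make that rule of thumb concrete, here is a minimal sketch of the question's connector tuned for a hypothetical lightweight application on a 4-core box (600 is simply 150 × 4; treat it as a starting point for load testing, not a prescription):

<!-- Hypothetical: lightweight app, 4 cores, 150 threads per core.
     A heavyweight app on the same box would start nearer 300. -->
<Connector port="8080"
        protocol="org.apache.coyote.http11.Http11AprProtocol"
        connectionTimeout="3000"
        redirectPort="8443"
        maxThreads="600" />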

Obviously HTTPS has a slightly higher overhead, so the standard practice is to reduce the number of accept threads accordingly.
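As an illustration of that practice, and assuming the same hypothetical 4-core box, an HTTPS connector might be given a lower thread ceiling than its HTTP counterpart (the certificate paths and the figure of 400 are placeholders, not recommendations):

<!-- Hypothetical HTTPS connector with maxThreads reduced to allow
     for TLS overhead; certificate paths are placeholders. -->
<Connector port="8443"
        protocol="org.apache.coyote.http11.Http11AprProtocol"
        SSLEnabled="true" scheme="https" secure="true"
        SSLCertificateFile="/path/to/cert.pem"
        SSLCertificateKeyFile="/path/to/key.pem"
        maxThreads="400" />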

Using the APR / native connector can improve throughput, but the limiting factor is usually the application profile, so again, no magic numbers.

The danger of using a thread setting that is too high is that the server can become "terminally busy": so much time is spent managing threads and servicing application demands that everything else suffers (GC being one notable symptom). It seems counter-intuitive, but generally less is more.

A busy server with the thread count correctly configured will degrade gracefully under heavy load. Too high and it’ll fall over!

Now, there are a number of related settings (acceptCount, minSpareThreads, timeouts, etc.) that will also need adjusting to suit, but that's beyond the scope of this answer.
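For reference, a sketch of how those related attributes appear on the connector; the values here are purely illustrative:

<!-- Illustrative values only: acceptCount bounds the queue of
     connections waiting for a free worker thread, minSpareThreads
     keeps a floor of idle threads ready for bursts. -->
<Connector port="8080"
        protocol="org.apache.coyote.http11.Http11AprProtocol"
        maxThreads="600"
        minSpareThreads="25"
        acceptCount="100" />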

Michael

The answer depends on the load you're expecting to serve.

Quoting the accepted answer from a Stack Overflow thread which, even though it talks about a setup with Tomcat and Apache, applies here as well:

You should consider the workload the servers might get.

The most important factor might be the number of simultaneously connected clients at peak times. Try to determine it and tune your settings in a way where:

  • there are enough processing threads [...] that they don't need to spawn new threads when the server is heavily loaded
  • there are not way more processing threads in the servers than needed as they would waste resources.

[...]

For example consider an application where you have ~300 new requests/second. Each request requires on average 2.5 seconds to serve. It means that at any given time you have ~750 requests that need to be handled simultaneously. In this situation you probably want to tune your servers so that they have ~750 processing threads at startup and you might want to add something like ~1000 processing threads at maximum to handle extremely high loads.
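Applying that arithmetic to the connector from the question (roughly Little's law: 300 requests/s × 2.5 s per request ≈ 750 requests in flight) would give a sketch like the one below. The numbers belong to the quoted example, not to your application, so measure your own request rate and service time first:

<!-- Sketch for the quoted example only: keep ~750 threads warm,
     with headroom up to ~1000 for extreme peaks. -->
<Connector port="8080"
        protocol="org.apache.coyote.http11.Http11AprProtocol"
        minSpareThreads="750"
        maxThreads="1000" />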

zagrimsan