
We have an application running in IIS 8.5 configured as a "web garden", with, say, Max Worker Processes = 10. The reason for this is long-running, DB- and network/async-intensive requests: in a very busy application, we want to avoid filling up a single worker process's request queue.

The AppPool hosts a .NET Core application (No Managed Code), serving a RESTful API, with pipeline mode = Integrated.
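For reference, the settings discussed here live on the application pool. A sketch of inspecting and setting them with appcmd, assuming IIS 8.5 on Windows; the pool name "MyApiPool" is a placeholder:

```shell
:: Show the pool's current configuration, including processModel
:: settings such as maxProcesses and idleTimeout ("MyApiPool" is a placeholder).
%windir%\system32\inetsrv\appcmd.exe list apppool "MyApiPool" /text:*

:: Turn the pool into a web garden with up to 10 worker processes.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyApiPool" /processModel.maxProcesses:10

:: The per-worker-process request queue length is a separate setting
:: (1000 is the default).
%windir%\system32\inetsrv\appcmd.exe set apppool "MyApiPool" /queueLength:1000
```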

What we are trying to figure out is how IIS decides when to create an additional worker process. We could not find any documentation, article, or post that explains what triggers the "need" for the next worker process, up to the configured maximum.

We can't really understand how IIS determines when to scale up the worker processes, which we need in order to better tune the Max value. From tests and observations so far, it seems oddly random, although there is some indication that new requests will sooner or later cause a new worker process to be spawned (even at very low request volumes, say one request every 5 minutes).

From the different tests, we noticed that:

  • Upon starting up the AppPool, only one worker process is created
  • Hitting the application with a few requests (not even 5) causes the second one to kick in. This is far below the queue length, which is set to the default of 1000.
  • Requests coming from different "clients" do not directly cause a new process to be launched for each new "client".
  • Invoking different methods/resources within the application does not trigger a new worker process either.
  • After a period in which all worker processes (3 instances) were idle, a new request to the application causes a 4th process to be created. This pattern repeats over time, even under minimal load, reaching up to 8 worker processes with all of them doing pretty much nothing.
  • There are no events in the Event Log for WAS or IIS that give any detail in this regard, just a few #1001 "application 'xxxxx' started process..." entries when a new instance is created.
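One way to watch the behavior described above is to poll IIS for its running worker processes while sending requests, rather than relying on the Event Log. A sketch using appcmd:

```shell
:: List the worker processes currently running, with their PID and the
:: application pool they belong to, to watch new w3wp.exe instances
:: appear (or disappear after the idle timeout) over time.
%windir%\system32\inetsrv\appcmd.exe list wp
```

Running this in a loop alongside a low-volume request generator makes it easy to correlate each new request with the moment an additional w3wp.exe instance starts.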

Any clues?

luiggig
  • "The reason for this is long-running requests, DB and network/async intensive, to avoid filling up the queue of the worker process, in a very busy application." That's the worst reason to use, and an overall bad design. Long-running requests should be moved to separate applications (ideally a Windows service app) with a scheduler. Then you have a simple web app that only serves HTTP requests, and all the issues you observed today can disappear. It is meaningless to pursue the wrong track. – Lex Li Feb 19 '19 at 21:05
  • Thanks for the comment @LexLi. Are you saying we should implement a RESTful API through a Windows service? – luiggig Feb 20 '19 at 09:37

0 Answers