Are DPC latency issues improved with multiple cores?

1

DPC Latency Checker says:

A device driver cannot process data immediately in its interrupt routine. It has to schedule a Deferred Procedure Call (DPC) which basically is a callback routine that will be called by the operating system as soon as possible. ... There is one DPC queue per CPU available in the system. ... If any DPC runs for an excessive amount of time then other DPCs will be delayed by that amount of time. ... Unfortunately, many existing device drivers do not conform to this advice. Such drivers spend an excessive amount of time in their DPC routines, causing an exceptionally large latency for any other driver's DPCs. For a device driver that handles data streams in real time, it is crucial that a DPC scheduled from its interrupt routine is executed before the hardware issues the next interrupt. If the DPC is delayed and runs after the next interrupt has occurred, a hardware buffer overrun typically occurs and the flow of data is interrupted. A drop-out occurs.
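To make the mechanism in that quote concrete, here is a minimal sketch of the usual ISR/DPC split in a WDM driver. The device context, routine names, and buffer handling are hypothetical, not taken from any particular driver; only the kernel APIs (KeInitializeDpc, KeInsertQueueDpc) are real.

```c
/* Minimal sketch of the ISR/DPC split described above, using WDM kernel APIs.
   The device context, register handling, and buffer logic are hypothetical. */
#include <ntddk.h>

typedef struct _DEVICE_CONTEXT {
    KDPC  Dpc;             /* DPC object, initialized once at device start */
    ULONG PendingSamples;  /* example state handed from the ISR to the DPC */
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;

/* DPC routine: runs at DISPATCH_LEVEL "as soon as possible" after the ISR.
   Keep it short; while it runs, no other DPC queued to the same CPU runs. */
VOID AudioDpcRoutine(PKDPC Dpc, PVOID DeferredContext, PVOID Arg1, PVOID Arg2)
{
    PDEVICE_CONTEXT ctx = (PDEVICE_CONTEXT)DeferredContext;
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    /* Copy data out of the hardware buffer before the next interrupt arrives;
       defer anything expensive to a worker thread at PASSIVE_LEVEL. */
    ctx->PendingSamples = 0;
}

/* ISR: runs at device IRQL; do the bare minimum and queue the DPC. */
BOOLEAN AudioInterruptService(PKINTERRUPT Interrupt, PVOID ServiceContext)
{
    PDEVICE_CONTEXT ctx = (PDEVICE_CONTEXT)ServiceContext;
    UNREFERENCED_PARAMETER(Interrupt);

    /* (hypothetical) acknowledge the interrupt in the device registers here */

    /* Queue the deferred work; by default the DPC lands on the
       current processor's DPC queue. */
    KeInsertQueueDpc(&ctx->Dpc, NULL, NULL);
    return TRUE;
}

/* At device start: KeInitializeDpc(&ctx->Dpc, AudioDpcRoutine, ctx); */
```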

A Stack Overflow answer says:

A DPC is queued into the global DPC queue, and can be run on any processor. So if you really have a long-running DPC on one core, the other core is free to process another. So any timing information really depends on the number of processors you have and how many things are currently being executed concurrently. So on multicore processors these numbers might vary widely.

Generally what I've read is that a fast dual core is better than a slower quad core for audio, since most audio apps aren't optimized to use more than one core.

But on modern computers it sounds like DPC issues are the bottleneck for audio production. Does this mean a quad-core processor would be better than a dual-core? Other free cores could theoretically handle the audio DPCs while one is locked up by a rude Wi-Fi DPC routine. Is the queue shared between cores, so that DPCs can be shuffled around to whichever one is free? Or is there one queue per core, allowing a single core to be hijacked? And what about virtual cores?
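For what it's worth, the DDK does expose per-processor targeting: by default a DPC goes onto the queue of the CPU on which KeInsertQueueDpc is called, but a driver can pin it to a particular core with KeSetTargetProcessorDpc. A minimal sketch, continuing the hypothetical audio device from the earlier sketch (the chosen CPU number is just a placeholder):

```c
/* Sketch: pinning a DPC to a particular processor's DPC queue.
   The context pointer and the chosen CPU number are hypothetical. */
#include <ntddk.h>

VOID AudioDpcRoutine(PKDPC Dpc, PVOID DeferredContext, PVOID Arg1, PVOID Arg2);

VOID SetupAudioDpc(PKDPC Dpc, PVOID Context)
{
    KeInitializeDpc(Dpc, AudioDpcRoutine, Context);

    /* Without this call, KeInsertQueueDpc puts the DPC on the queue of
       whichever CPU the ISR happened to run on. With it, the DPC always
       goes to CPU 1's queue, away from (say) a noisy driver stuck on CPU 0. */
    KeSetTargetProcessorDpc(Dpc, 1);

    /* Optionally ask the kernel to drain the queue sooner. */
    KeSetImportanceDpc(Dpc, HighImportance);
}
```

Whether a third-party audio driver actually does this is up to that driver, which is why the answers below focus on the badly behaved driver rather than the core count.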

endolith

Posted 2010-09-16T02:32:25.010

Reputation: 6 626

If you had both systems, you could use this tool to compare and get real-world data: http://www.thesycon.de/eng/latency_check.shtml

– Moab – 2010-09-18T22:36:08.637
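Beyond that tool, one way to get a rough view of how much DPC work the CPUs are doing is the "% DPC Time" performance counter. A minimal user-mode sketch using the PDH API follows; the one-second interval, sample count, and "_Total" instance are arbitrary choices, and per-core instances ("0", "1", ...) can be substituted.

```c
/* Sketch: sampling "% DPC Time" with the PDH API. Link with pdh.lib.
   Interval, sample count, and counter instance are arbitrary examples. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* "(_Total)" sums all cores; use "(0)", "(1)", ... for individual cores. */
    if (PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% DPC Time",
                              0, &counter) != ERROR_SUCCESS)
        return 1;

    PdhCollectQueryData(query);  /* first sample primes the counter */

    for (int i = 0; i < 10; i++) {
        Sleep(1000);
        PdhCollectQueryData(query);
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                        NULL, &value) == ERROR_SUCCESS)
            printf("%% DPC Time: %.2f\n", value.doubleValue);
    }

    PdhCloseQuery(query);
    return 0;
}
```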

Answers

0

Latency in a Deferred Procedure Call (DPC) is caused by a driver taking a long time to do its thing.

Adding more CPUs will not improve the time a poorly written driver takes to do its processing.

Ian Boyd

Posted 2010-09-16T02:32:25.010

Reputation: 18 244

Can you back up your opinion with facts? There's a separate DPC queue for each CPU. – endolith – 2011-05-06T04:41:49.093

I cannot. Kernel-mode code cannot be multi-threaded. And the only time there's an issue with DPC latency is when a poorly written driver is behaving poorly. Throwing more cores at it won't make it run gooder. – Ian Boyd – 2011-06-15T02:15:29.173

The question asked was not "will more CPUs make a bad DPC run faster?" The question is "will more CPUs let my audio driver run when a badly behaved driver is also installed in the system?" – Colin Jensen – 2013-08-27T22:46:46.750

Then his only option is to buy the faster machine and observe for himself that things are not any better.

– Ian Boyd – 2013-08-28T01:17:19.080