One can compute the CPU time used, assuming the task scheduler balances the tasks evenly across all cores (or at least distributes them), as:
CPU Time = Application Time * Number of Cores * Average CPU Utilization
The System Idle Process is just a wrapper for the task scheduler when there is no work for the CPU to do, so when nothing else is running on the system, it usually shows very high utilization. Assuming you have a 4-core system and your idle process takes up 95% of the CPU, every second you would expect the idle process's CPU time to increase by:
CPU Time = (1 second) * (4 cores) * (0.95) = 3.8 seconds
Note that as processors improve and operating systems become more optimized, this would theoretically max out at 100% (i.e., at idle, the CPU has literally no work to do relative to its capabilities), in which case you would expect the idle process's CPU time to increase at real time multiplied by the number of cores.
Note that this formula applies even to single-threaded applications: if a single-threaded application runs constantly on a 4-core machine, the maximum overall processor utilization is only 25%, so the CPU time for that application should nearly match real time:
CPU Time = (1 second) * (4 cores) * (0.25) = 1 second
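As a quick sanity check, the arithmetic above can be sketched in a few lines of Python (the function name `cpu_time` is mine for illustration, not any OS API):

```python
def cpu_time(wall_seconds, cores, avg_utilization):
    """CPU time accumulated = wall-clock time * number of cores * average utilization."""
    return wall_seconds * cores * avg_utilization

# Idle process on a 4-core machine at 95% average utilization:
print(cpu_time(1, 4, 0.95))  # 3.8 seconds of CPU time per real second

# Single-threaded app pegging one of 4 cores (25% overall utilization):
print(cpu_time(1, 4, 0.25))  # 1.0 second of CPU time per real second
```

This matches the two worked examples: the idle process accumulates CPU time faster than real time on a multi-core machine, while a fully busy single-threaded application accumulates it at roughly real time.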
Just one quibble: The Idle Process isn't just a "wrapper for the task scheduler". There is an actual idle thread dedicated to each CPU (or for each logical processor if you have HT enabled) and the thread scheduler (not "task scheduler", that's the "scheduled tasks" thing) really does context switch to it when there's nothing else for a CPU to do. Nor do the idle threads really do nothing; they help run other CPUs' DPCs, they notify the power manager of the core's idleness, etc. – Jamie Hanrahan – 2015-09-13T05:45:40.803