When does CPU usage start to impact users or services?

1

I work with servers and am now starting to gravitate more toward server performance monitoring.

Some application developers I've met lately claim that Windows/Linux services and their applications (web services, file servers, math applications, BI, databases, etc.) start to suffer a considerable loss of processing power once CPU usage reaches around 75%, even though 25% of the processing power is still free.

Does CPU usage really have an impact on an application's performance once it passes 75%?

David Lago

Posted 2016-02-10T12:14:15.470

Reputation: 11

Question was closed 2016-02-12T14:03:27.903

The problem here is that it depends entirely on the workload. You can have something that uses 100% of the memory bandwidth available to the CPU but only 30% of the CPU time. The same goes for HDD, SSD or even GPU resources. It's impossible to say definitively that 80%, 90% or even 100% CPU usage will impact a user or service on that system. Something using 100% of the CPU but at minimum priority won't affect processes at a higher priority, and so won't affect the system at all. A system has to be specified and configured to match its workload, not assumed that "it'll do" if CPU < 80%. – Mokubai – 2016-02-10T13:37:55.580
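A rough, Linux-only sketch of that priority point (the core number, nice level, and workload size are arbitrary choices, not anything from the thread): a busy loop at nice 19 pegs the pinned core at 100%, yet a fixed chunk of normal-priority work on the same core is barely slowed.

    import os
    import time
    import multiprocessing as mp

    def low_priority_hog():
        os.sched_setaffinity(0, {0})   # Linux-only: stay on CPU 0
        os.nice(19)                    # drop to the lowest scheduling priority
        while True:
            pass                       # consume every cycle the scheduler offers

    def work():
        # A fixed chunk of CPU-bound work standing in for a normal-priority service.
        total = 0
        for i in range(5_000_000):
            total += i * i
        return total

    if __name__ == "__main__":
        os.sched_setaffinity(0, {0})   # keep the measurement on the same core
        t0 = time.perf_counter()
        work()
        print(f"alone: {time.perf_counter() - t0:.3f} s")

        hog = mp.Process(target=low_priority_hog, daemon=True)
        hog.start()
        time.sleep(0.5)                # let the hog saturate the core
        t0 = time.perf_counter()
        work()
        print(f"with nice-19 hog: {time.perf_counter() - t0:.3f} s")
        hog.terminate()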

Answers

0

With antivirus, you are probably IO-bound rather than CPU-bound. It's quite likely that any process which requires additional IO resources would work more slowly in that case. Even if your CPU is 100% used, you won't necessarily notice any slowdown at all, depending on the scheduler and the priority levels of the running processes.

But let's imagine the case where all processes are entirely CPU-bound: no IO happening at all, no unusual interrupts, and so on. And further, let's imagine a single CPU. In that case, if your CPU is 75% occupied, you absolutely would have access to the other 25% of the CPU to process a call from the user. The latency would be a little higher than if the CPU were entirely idle, since each context switch costs tens to thousands of nanoseconds, but that is still a fraction of a millisecond.
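A rough, Linux-only sketch of that single-CPU scenario (the 75% duty cycle, core number, and workload sizes are arbitrary choices): a background process keeps one core about three-quarters busy while the same fixed chunk of CPU-bound work is timed with and without it competing.

    import os
    import time
    import multiprocessing as mp

    def pin_to_core0():
        os.sched_setaffinity(0, {0})         # Linux-only: confine the process to CPU 0

    def hog(duty=0.75, period=0.1):
        # Keep the core busy for roughly `duty` of every `period` seconds.
        pin_to_core0()
        while True:
            start = time.perf_counter()
            while time.perf_counter() - start < duty * period:
                pass                         # busy phase
            time.sleep(period * (1 - duty))  # idle phase: the remaining headroom

    def cpu_task(iterations=2_000_000):
        # A fixed chunk of math work standing in for one user request.
        x = 0.0
        for i in range(1, iterations):
            x += i ** 0.5
        return x

    def timed(label):
        t0 = time.perf_counter()
        cpu_task()
        print(f"{label}: {time.perf_counter() - t0:.3f} s")

    if __name__ == "__main__":
        pin_to_core0()
        timed("idle core")
        p = mp.Process(target=hog, daemon=True)
        p.start()
        time.sleep(0.5)                      # let the hog settle into its duty cycle
        timed("core ~75% busy")
        p.terminate()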

Note, though, that many processes do use significant IO resources. If you have two processes competing for IO, you may see a significant slowdown. Using an SSD instead of a mechanical drive will help considerably. You can also choose a different IO scheduler, at least on Linux.
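A hedged, Linux-only illustration of the IO scheduler point; "sda" and "mq-deadline" are example values only, so check /sys/block/ and the scheduler file itself to see what your kernel actually offers.

    # The block-device IO scheduler is exposed under /sys/block/<device>/queue/scheduler.
    SCHEDULER_FILE = "/sys/block/sda/queue/scheduler"

    with open(SCHEDULER_FILE) as f:
        # The currently selected scheduler is shown in [brackets].
        print("Schedulers:", f.read().strip())

    # Switching requires root; uncomment to select another scheduler.
    # with open(SCHEDULER_FILE, "w") as f:
    #     f.write("mq-deadline")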

ChrisInEdmonton

Posted 2016-02-10T12:14:15.470

Reputation: 8 110

Okay, you're right, AV was a bad example. We performed a test here with the following scenario:

We set up a server with a single CPU and more than enough RAM. It runs a script that loops, stressing the CPU with math calculations, and when it reaches the end of the loop it writes a datetime to a text file. The more I increase a parameter that adds "load" to it, the more CPU it uses. Once it reaches about 80%, every additional 2 or 3 percent brings an exponential increase in the time it takes to write the datetime log entry. – David Lago – 2016-02-10T15:01:03.197

For example: 20% takes 20 seconds; 25%, 25 seconds; 50%, 50 seconds; 80%, 80 seconds; 82%, 85 seconds; 85%, 98 seconds; 90%, 130 seconds; 95%, 210 seconds; 100%, 600 seconds.

I don't take it as real proof, but it does happen.

I wonder why? – David Lago – 2016-02-10T15:04:25.907
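A minimal sketch of the kind of load script described in the comments above; the load parameter, iteration count, and log file name are hypothetical stand-ins for whatever the original test used.

    import sys
    import math
    import datetime

    def burn(load):
        # Do an amount of math work proportional to the hypothetical "load" parameter.
        for _ in range(load * 1_000_000):
            math.sqrt(12345.678)

    if __name__ == "__main__":
        load = int(sys.argv[1]) if len(sys.argv) > 1 else 1
        burn(load)
        # When the loop finishes, append a timestamp, as in the test described above.
        with open("cpu_test.log", "a") as f:
            f.write(datetime.datetime.now().isoformat() + "\n")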