Why won't my CPU operate at its max potential even when my application (which utilizes the CPU's resources) is lagging?


Why does my CPU never max out even when my application, which eats up 40% of the CPU (while 30-40% of the CPU still stays idle), is laggy?

Does that mean:

  1. There's a way to force the CPU to run at 100%
  2. CPUs are manufactured that way and the only thing I can do is purchase new hardware.
  3. The limitation lies with the application and the lag won't improve even with a better CPU. (Assuming that the application is perfect, what I'm asking is whether software runs that way.)
  4. Something else.

Nhu Thai Sanh Nguyen

Posted 2017-11-02T11:45:00.510

Reputation: 517

An application is only going to use the resources it requires. – Ramhound – 2017-11-02T11:49:37.263

@Ramhound, but it's laggy, so it must require more, correct? – Nhu Thai Sanh Nguyen – 2017-11-02T11:52:30.477

Use a CPU stress tool and check whether the CPU runs at 100%. (https://www.mersenne.org/download/)

– Joe6pack – 2017-11-02T11:52:53.960

I understand what OP is getting at. I despise how, when antivirus or Windows Update is running, it completely bogs my system down, maxing out that one single core... leaving the rest of them to do what? I end up sitting for 12 minutes until I can get back to work. – None – 2017-11-02T14:11:51.627

Any application that maxes out even one of the several shared resources in a computer (CPU, storage, memory, or network) can make the entire system slow, despite the remaining shared resources having unused capacity. – I say Reinstate Monica – 2017-11-02T15:12:50.850

I've been programming for 25 years. And I can assure you - this is not a conspiracy. It's rare to find applications where the CPU is the bottleneck; most of the time applications are waiting on other things like the hard drive, RAM, or network. – Contango – 2017-11-02T15:51:38.410

Suppose you are doing some computational work -- doing your taxes, let's say. If you do zero work while you are waiting for your tax forms to arrive in the mail, then you will do zero work for a long, long time. Laggy apps are often badly written; they block the UI thread on high-latency operations like disk or network I/O that have nothing to do with the CPU, and so the CPU is idle and the app is unresponsive. Getting a faster CPU doesn't help; that just gets you to the blocking high-latency operations faster. – Eric Lippert – 2017-11-02T16:52:55.653
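Eric Lippert's point -- that a "busy" program can leave the CPU idle -- is easy to demonstrate. This is an editor's sketch in Python (not from the thread); it uses `time.sleep` as a stand-in for a blocking disk or network call and compares wall-clock time against actual CPU time.

```python
import time

# Simulate a blocking, high-latency operation (disk or network I/O) with
# sleep, then compare wall-clock time to the CPU time actually consumed.
wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.5)  # stand-in for waiting on a disk read or network reply

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

print(f"wall time: {wall_elapsed:.2f}s, CPU time: {cpu_elapsed:.3f}s")
# CPU time stays near zero: the program was "busy" for half a second while
# the CPU sat idle -- exactly what a laggy, I/O-blocked app looks like.
```

A program stuck like this feels slow to the user even though Task Manager shows the CPU doing almost nothing.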

@EricLippert This happens to me all the time when I'm driving. Except it's people with their fast cars that get to the stoplight before I do. – I say Reinstate Monica – 2017-11-02T16:59:20.983

@TwistyImpersonator: That is a good analogy for high-contention multithreaded programs. We sometimes see that a multithreaded program will run slower on a faster CPU because the CPU is getting more threads into a blocked state faster than a slower CPU would. If every driver in New York City was given a 10x faster car tomorrow, commute times would not improve. They would get worse. – Eric Lippert – 2017-11-02T17:17:08.370

I suggest making sure all drivers, especially chipset drivers, are up to date to make sure the OS is interacting well with your hardware. – Derkooh – 2017-11-02T19:09:48.030

Heck, just read what these people monitored and optimized through!!

https://medium.com/netflix-techblog/serving-100-gbps-from-an-open-connect-appliance-cdb51dda3b99

– Pysis – 2017-11-03T17:52:20.920

@SiXandSeven8ths Hard drive seek time is almost certainly your problem in the case where you are experiencing an unresponsive system during a virus scan. Buy an SSD; you will never regret it. – trognanders – 2017-11-03T18:49:48.373

@BaileyS, I have SSDs. Resource hogging Malwarebytes and Windows Defender are the culprits. Scanning is a bitch apparently. Firefox and Chrome crap things up from time to time too. Fresh Windows install. Older hardware though. – None – 2017-11-03T19:16:06.307

As an aside, in any multitasking operating system, it is literally impossible for the actual usage to be 100%, because things like drivers, interrupts, messages, task switching, and so on will consume some part of your processing power. Windows, at least, seems to report 1-2% CPU usage for itself, so hypothetically you'd see at best 99% usage even if the program could use all the cores and threads available in hardware. – phyrfox – 2017-11-03T20:22:46.723

Answers

94

You're probably running single-threaded applications, which can only max out a single CPU core. Since 100% of one core is less than 100% of the capacity of a multi-core CPU, total CPU utilization doesn't reach 100%.

You can confirm this by viewing the individual core utilization in Task Manager. Look for single cores that are approaching max utilization.
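A quick back-of-the-envelope check (an editor's sketch in Python, not part of the original answer): one fully busy core shows up in the "total CPU" figure as 100% divided by the number of logical CPUs the OS reports.

```python
import os

# One fully busy core appears in "total CPU" as 100% divided by the
# number of logical CPUs the OS reports.
logical_cpus = os.cpu_count() or 1
single_core_share = 100 / logical_cpus
print(f"{logical_cpus} logical CPUs -> one maxed core reads as "
      f"{single_core_share:.0f}% total utilization")
```

On a quad-core machine this prints 25%, which is why a single-threaded program can never push the overall graph past that.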

I say Reinstate Monica

Posted 2017-11-02T11:45:00.510

Reputation: 21 477

Because of switching, you will only see distributed usage that sums up to the use of one core. Basically, the app uses exactly one core, but it hops around between the cores, so each one averages out to 1/n. – Aganju – 2017-11-02T12:01:30.073

I would also say it's possible that the CPU isn't actually the bottleneck. – None – 2017-11-02T18:15:11.020

Certainly possible and even likely, but 30-40% seems a little odd. For a dual-core system, one core maxed out would obviously be 50%, and for a quad-core system, one core would be 25%. One maxed-out core showing as 30-40% would imply a triple-core system, which is pretty uncommon. – Evan Steinbrenner – 2017-11-02T20:19:17.857

I vaguely recall Dwarf Fortress famously bottlenecking one core at 100%, so he began forking off other bits into a second thread, leading to the "main" thread locked at 100%, and the "background" thread hovering around 20-60%. On a quad core, that's... 30-40%. – Mooing Duck – 2017-11-02T20:45:18.993
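The arithmetic behind that comment checks out; a quick worked version (editor's sketch, assuming a quad-core machine as in the comment):

```python
# Quad core: total capacity is 4 cores x 100% = 400 "percent-points".
# Main thread pinned at 100%, background thread between 20% and 60%:
total_capacity = 4 * 100
low = (100 + 20) / total_capacity * 100   # 30% overall
high = (100 + 60) / total_capacity * 100  # 40% overall
print(f"overall utilization: {low:.0f}%-{high:.0f}%")
```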

Don't forget Turbo mode on CPUs. Task Manager doesn't take that into account when computing the load percentage. On my i5-4570S I often see a load of about 30%. That's one core (25%) with its regular 2.9 GHz turboed to 3.4 GHz; 25 × 3.4 / 2.9 is nearly 30%. With a higher spread between base and turbo frequency you can get higher. – Sunzi – 2017-11-03T11:15:44.297

@AytAyt - I'd go a step further and say it's not just possible, it's almost certain. Unless the OP's application is doing pure number crunching (or using spin-locks everywhere), it's actually quite difficult to fully load a CPU, even with a multithreaded program. Any disk or network I/O will leave idle cycles, and given the OP's mention of a "lagging" app it seems probable that there is some network communication in play. – aroth – 2017-11-03T14:42:56.007

Why is this answer upvoted? A CPU hog is rarely the issue. A much more probable cause is that it freezes while waiting for something (disk, network). – Oskar Skog – 2017-11-03T20:43:48.060

@Aganju, if the operating system does that, it is obviously insane and mentally challenged. Due to L1 (and L2, usually) caches being core-local, moving a thread between cores incurs significant penalty, so any CPU-intensive process should stick to one core if possible. – Jan Hudec – 2017-11-04T17:30:23.373

@JanHudec You're right about the penalty, but some OSs do it anyway at times. How bad the penalty ends up being varies a lot on the workload in question. I've had significant performance increases at times by manually setting thread affinity to one core. I'm not sure what the reasoning is for the scheduler moving it around, though I wouldn't be surprised if it has something to do with asymmetric core heating. I imagine one core running at 85 C while the others run at 35 C probably isn't great for the chip. – reirab – 2017-11-05T05:16:06.997

I'd also like to note that I am developing a program on a quad-core system that maxes out one CPU core and doesn't touch the others. I see the exact performance that OP is seeing for this specific reason. – Byte11 – 2017-11-06T02:02:26.350

49

You haven't specified your OS, so this answer will be generic.

Applications can be limited by various reasons. The bottleneck can be in:

  • CPU
    • low speed
    • single/low threaded apps (not capable of using all cores/threads)
  • I/O
    • disk throughput
    • disk latency
    • network throughput
    • network latency
  • memory
    • capacity
    • throughput
    • latency
    • insufficient cache
    • locality (NUMA)
    • swapping

And there are more reasons, which aren't so common.

So have a look at your system's resources and try to analyze it for bottlenecks other than just total CPU load.
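As a starting point for that analysis, per-core load can be sampled without any extra tools. The sketch below is an editor's addition, not part of the answer, and is Linux-only: it reads the per-CPU counters from `/proc/stat` twice and computes how busy each core was over the interval (a single maxed-out core stands out immediately).

```python
import time

def cpu_times():
    """Read per-CPU counters from /proc/stat (Linux-only)."""
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            # Per-CPU lines look like "cpu0 user nice system idle iowait ..."
            if line.startswith("cpu") and line[3].isdigit():
                name, *fields = line.split()
                values = [int(x) for x in fields]
                # values[3] is idle time, values[4] is time waiting on I/O
                stats[name] = (sum(values), values[3] + values[4])
    return stats

before = cpu_times()
time.sleep(1)
after = cpu_times()

for name in sorted(before, key=lambda c: int(c[3:])):
    total = after[name][0] - before[name][0]
    idle = after[name][1] - before[name][1]
    busy = 100 * (1 - idle / total) if total else 0.0
    print(f"{name}: {busy:.0f}% busy")
```

On other OSes, Task Manager's per-core graphs (Windows) or a third-party library such as psutil give the same picture.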

Jaroslav Kucera

Posted 2017-11-02T11:45:00.510

Reputation: 1 352

Also: the video card has a separate GPU, I/O, and memory; any of those might also be the issue. – Mooing Duck – 2017-11-02T20:45:33.623

@MooingDuck True, but that would usually only be an issue if the application in question is actually using the GPU (i.e. a 3D game or a CUDA/OpenCL app or some such thing.) – reirab – 2017-11-03T23:21:55.077

13

In general, when people talk about their computer being slow, I mention dust. As a former computer tech with 15 years of professional experience, I found that simply blowing out dust can significantly improve performance.

I'm not talking about a thin, almost imperceptible amount of dust, but rather large clumps or even mats that prevent normal airflow. I've seen heat sinks covered in what amounted to a filter made of dust rather than an actual filter. This blocks a very significant amount of air from ever cooling the CPU. Removing dust like this will tend to quiet fans instantly and allow your components to survive longer. Heat has killed many a computer I was asked to fix.

Going along with the heat issue idea, you might also try better thermal paste. The white cr@p most processors come with is like the Yugo of thermal paste. I use Arctic Silver, but there's even better stuff than that. Arctic Silver is about a Porsche (using the car rating scale), but there are Ferraris and supercar varieties out there.

Processors tend to slow down when they are overheating. This is a physical effect as well as a self-preservation mechanism programmed into many CPUs. I don't know if it'll still show 100% in Task Manager or if it'll show 40% (like you see), but it can be a significant slowdown while the CPU tries to let the heat sink and fan "catch up."
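One way to spot-check for throttling (an editor's sketch, not from the answer): on Linux, the cpufreq sysfs interface exposes the current and maximum core frequency. A busy core sitting far below its maximum suggests thermal throttling or an aggressive power-saving governor. The paths below are the standard Linux cpufreq locations and may not exist on all systems (e.g. some VMs).

```python
from pathlib import Path

freq_file = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
max_file = Path("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq")

if freq_file.exists() and max_file.exists():
    cur = int(freq_file.read_text()) / 1000  # values are in kHz -> MHz
    top = int(max_file.read_text()) / 1000
    print(f"core 0: {cur:.0f} MHz of {top:.0f} MHz max")
    # A loaded core far below its max frequency points at throttling
    # or power-saving settings rather than a software bottleneck.
else:
    print("cpufreq interface not available on this system")
```

On Windows, Task Manager's Performance tab shows the current clock speed for the same check.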

Another thing that could be slowing your CPU down is the GPU. If you are running graphics-intensive games or utilities (like CAD), your GPU might be holding back your CPU. Getting a better video card might be something to look at. Also, using the wrong card might be holding you back. Gaming cards aren't (usually) designed to work with CAD as well as workstation cards, and workstation cards (usually) won't game very well, either. Some do, but most don't.

As @Jaroslav Kucera mentioned, it could be disk related. Hitting the HD(s) a lot can slow you way down. I normally run multiple drives: one dedicated to the OS and other(s) for software, the Windows page file, personal files, etc. Besides not having to worry so much about backing personal data up in case of OS failure, having multiple HDs spreads the workload considerably. Reading from and writing to the same disk at the same time can seriously slow down the HD. Using SSDs can mitigate this, but not entirely. Photoshop and video-editing software are known to hit HDs hard. Reading from one HD and outputting to a second HD is the way to go. This also helps the life of your HDs. I also go with active cooling on my HDs. I haven't killed a hard drive since I put a fan and heat sink on them 15+ years ago. Google them; they are cheap insurance.

Believe it or not, your PSU might be slowing you down, too. If you don't have enough power (or your PSU is old or a cheap, overrated POS), you can have performance issues. I've seen firsthand what odd OS issues a flaky PSU can cause. You are looking at voltage as well as amps, so make sure they all match the specs on the PSU, if you go this route, and also make sure they meet or exceed your power needs. If your components total 500 watts and you're giving them even 475, that's bad. I recommend going over your requirements by about 20%, so as your PSU gets older (and drops power) and your other components get older (and require more power), you aren't stuck buying a new PSU so quickly.

Including the other answers here, there are still more reasons for your computer to run slow. Except for the PSU option, what I talked about were very commonly seen when I was a computer tech. Doing a benchmark and other tests are the only way you'll be able to figure things out. Swapping parts might not even solve the issue if it's a combo of multiple parts causing the slowdown.

And, AFAIK, there's no way to force your computer to use 100% of the processor. The CPU and OS know what they need to do and are really good at their jobs, usually. :-) I don't think anyone has yet figured out a way to force-feed a CPU to make it run at 100% when you think it should. At least not without feeding it extra junk to make the percentage "look good."

With you seeing 40% and not a whole-number division of 100% (like 25%, 33%, or 50%), I have a feeling it's not a single-threading issue. It could be, but that's not where my mind goes. +1 to @Twisty Impersonator for bringing it up right away, though.

Good luck trying to figure this one out! I've spent days trying to figure this kind of thing out, only to end up replacing most of the guts as a "last resort."

computercarguy

Posted 2017-11-02T11:45:00.510

Reputation: 800

+1 for pointing out the possibility that an application can get hung up on a maxed-out GPU. – I say Reinstate Monica – 2017-11-02T18:18:05.080

I forgot to mention, smoking near your computer is one of The Worst things you can do. It leaves a nasty, gross, and disgusting (can't emphasize that enough) orange-ish sticky mess that can't be cleaned off. Dust becomes caked on and impossible to clean off. You might be able to get it off with an auto-parts oil bath or a sonic water bath, but I never went through that trouble. Even cleaning the case is an effort in futility. – computercarguy – 2017-11-02T18:37:30.153

Just from a developer's perspective, the CPU will do whatever you tell it to. If it's not maxed out at 100%, it's because your program is waiting around on other stuff to happen (disk IO, network, user input, system messages, etc). If you have something for the CPU to do, it will automatically use 100% (assuming a multithreaded application) to do what your program needs - you don't have to "make it" use 100% or unlock it or something. – JPhi1618 – 2017-11-02T20:38:04.247
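That comment is easy to verify: give the CPU work and utilization climbs on its own. An editor's sketch in Python (the 0.5-second duration and worker count are arbitrary choices for illustration) -- it starts one CPU-bound process per logical CPU, which should drive total utilization toward 100% while it runs.

```python
import multiprocessing as mp
import os
import time

def burn(seconds):
    """Spin in a tight loop so one core stays fully busy."""
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        pass

if __name__ == "__main__":
    n = os.cpu_count() or 1
    # One CPU-bound process per logical CPU: while these run, total CPU
    # utilization should approach 100% in Task Manager or top.
    workers = [mp.Process(target=burn, args=(0.5,)) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"kept {n} logical CPUs busy for 0.5s")
```

This is essentially what stress tools like Prime95 (mentioned earlier in the thread) do, minus the useful math.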

@JPhi1618: You're forgetting heat. Regardless of what you tell it to do, if the CPU is overheating, it'll throttle itself to run at far less than 100%. – Mooing Duck – 2017-11-02T20:47:26.553

@MooingDuck, now I'm wondering what Linux and Windows actually show for CPU usage when the hardware is thermally throttling itself. – JPhi1618 – 2017-11-02T20:51:06.393

For the heat problem, it's pretty simple (at least in Linux - don't see why it shouldn't be in other OSes) to install a system monitor (I use conky, but there are others) that reports CPU temps (113.0F on my system right now), as well as other things like fan speeds. – jamesqf – 2017-11-03T03:24:14.977

@JPhi1618: You can check CPU temp easily in software. If it's under 80C, you're probably not throttling. (But if it's over 70 to 75 under anything but max-power loads like prime95 or maybe video encoding, check your cooling setup for dust.) I think there are ways for software to detect that thermal protection throttling happened, but I'm not sure what. There isn't a hardware performance counter for it (like there is for cache misses). You can of course get the core clock frequency and see if it's lower than you expect under a load that keeps a core busy.) – Peter Cordes – 2017-11-03T07:04:55.097

Possibly also useful: https://wiki.ubuntu.com/Kernel/PowerManagement/ThermalIssues

– Peter Cordes – 2017-11-03T07:06:55.557

@JPhi1618 CPUs are usually throttled by scaling their frequency (underclocking), so instead of running at, say, 3.0 GHz, they run at 2.0 GHz. So a thermally throttled CPU may still report 100% load, as every "work slot" is occupied; there are just fewer "work slots" available per unit of time. – el.pescado – 2017-11-03T09:15:36.353

"I've seen first hand what odd OS issues a flaky PSU can do." - first example: "My disk is broken". No, your power supply can't give enough power. – Michał Leon – 2017-11-03T09:25:01.963

When I looked at comparisons, Arctic Silver was remarkably poor. It was first in the market, but has fallen behind in terms of performance. – JDługosz – 2017-11-03T10:42:13.770

@JDługosz, that's like saying the average Porsche is junk when comparing to NASCAR and Formula series cars. For my money, Arctic Silver is still a much better alternative than the white "might as well be shaving cream" stuff that still comes with most processors today. Arctic Silver is also reasonably priced. – computercarguy – 2017-11-03T13:27:52.687

@computercarguy actually, the generic goo is often much better than it was back when needing it was a new thing; it will be more fit for purpose. But, without branding you don’t really know. Arctic Silver is overpriced, actually. – JDługosz – 2017-11-03T17:49:10.947

-1. Too many sweeping generalisations, poor judgement and outdated information. Too many opinions with no facts or reasoning. I know 'my computer is slow' is a broad subject, but half of what you covered above is badly paraphrased, badly worded and rambling conjecture. – HaydnWVN – 2018-03-27T16:14:32.887

4

It could be energy-saving settings in either the BIOS or the operating system. Many modern CPUs and motherboards have settings that allow the CPU to be more economical with electricity (especially on laptops, where battery life matters). You can probably turn such a setting off, but make sure you know what you are doing, as next to that setting there are usually others that can affect the functionality of the computer in other important ways.

mathreadler

Posted 2017-11-02T11:45:00.510

Reputation: 171

2

I regularly hit 100% utilization when doing rendering and math tasks, and I can verify that hyperthreading will hit 100%; instruction ordering is a big deal, too. Intel and AMD both have large amounts of hardware dedicated to instruction reordering to fill as many execution units as possible. If you're getting 30% on a modern machine, you may:

  • Be running hot -- check temps; Intel and AMD both downclock when they get hot, and it shows up as stutters and spikes.
  • Not be doing much with it... examples are:
    1. Web browsing
    2. Email
    3. Most simple games

I'd almost guarantee your problem is one or all of the following, starting at the top:
  • Get an SSD
  • Get an SSD
  • Put your OS on the SSD and move bulk data to a traditional multi-TB drive. Windows needs more access to its local files than anything.
  • Bonzi Buddy?
  • Keep at least 10% free space on every drive. NTFS is a journaling file system and performance goes down the fuller the drive gets.
  • You need an NVMe SSD for your OS drive ASAP (yeah, I said it again). The performance is amazing, and it carries on to part two of this... A major retailer was selling Samsung 961 NVMe 512GB drives for $300 today, which is plenty for normal use.
  • Windows 10 is GPU heavy. A cheap dedicated video card can take the load off both memory and CPU. You can still use the APU in combination with the video card, but you'll save some RAM, and VRAM is generally much faster.
  • Lower core-count CPUs are also memory bound. If you look at the i7s, they're all running quad-channel DDR in 4 banks. AMD's Epyc chips are 8-channel DDR4 with up to 32 cores, and even that doesn't fully remove the memory bottleneck. Finally, and I can't stress this one enough, spend the money on as much RAM as your machine will take. I've got 32GB and am buying 32 more later this year. Windows does something similar to Superfetch, but newer, which compresses memory in RAM that isn't being used so programs and data can just be unzipped when needed. As another example, I run a Linux VM for development, allocated 6/12 cores and 16GB RAM, and after the first load off the SSD it starts in ~3 seconds. CPU is considered very cheap these days by optimizations like that... decompressing Photoshop from memory is faster than loading from disk except in the case of a very fast SSD.

All of this stuff seems like overkill until I'm stuck compiling a 70k-file project or upscaling giant camera raw files to 17"x26" at 600 dpi in 16-bit color. Even at 100% utilization the resources are such overkill that you don't get slowdown. The other night I realized I had two VMs and Wolfenstein II loaded along with 2 IDEs (I'm distracted, sue me) and wasn't noticing slowdown. This is a ~$1500 machine BTW, nothing special, and most of it bought slowly over the years. Half of that is one of the Radeon RX Vega 64s, because my video card was 6 years old. Huge difference in rendering and such. Upgrading will likely get you more use out of your hardware than assuming that your 30% use is all you'll get.

If I threw a 5400RPM hard drive in this machine for OS it would run like total crap.

TL;DR it sounds like you're I/O bound right now. Spend a couple hundred on at least a 256GB SSD for the OS, 8GB of RAM, and a lower-end gamer card and the computer will last years. This one survived for 6 years before I finally did a processor and mobo refresh, and I was compiling an entire cross-compiler suite about 25 times a day with the old gear.

Call me overkill but I'm not recommending 8 Tesla cards or anything. :-) Do minor upgrades when you can and I think you'll solve a lot of these problems. I did years ago by adding an SSD to a Q6600 system and watching performance triple.

JimmyShinny

Posted 2017-11-02T11:45:00.510

Reputation: 21

1

Without knowing the specifics of your program, it's hard to tell, but since another answer looks at the possibility of the application being single threaded, I'll look at the application as if it's using proper multithreading.

A common thing that gets overlooked is physical cores versus "hyperthreaded cores." Hyperthreading excels at many short tasks with bottlenecks other than the CPU. For tight-looped, CPU-bottlenecked tasks, you are still limited by your physical core count, which is generally half your hyperthreaded core count. In the absolute worst-case scenario, your task manager may only show 50% usage because it counts hyperthreaded cores in its graphs, when in reality your physical cores may be at 100% usage. Generally, you'd show more than that, though, as your operating system will be able to use the hyperthreading for other unrelated tasks.
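The logical-versus-physical core counts behind this answer can be inspected directly. An editor's sketch, not part of the answer: `os.cpu_count()` reports logical CPUs, and on Linux the sysfs topology files identify which logical CPUs share a physical core (the sysfs paths are a Linux-only assumption; the loop falls back gracefully where they are absent).

```python
import os
from pathlib import Path

logical = os.cpu_count() or 1

# Count distinct (package, core) pairs via sysfs (Linux-only); each pair
# is one physical core, which may host two hyperthreaded logical CPUs.
physical = set()
for topo in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/topology"):
    try:
        package = (topo / "physical_package_id").read_text().strip()
        core = (topo / "core_id").read_text().strip()
        physical.add((package, core))
    except OSError:
        pass

print(f"logical CPUs: {logical}")
if physical:
    print(f"physical cores: {len(physical)}")
    print(f"all physical cores busy (one thread each) can read as only "
          f"{100 * len(physical) / logical:.0f}% of logical capacity")
```

With hyperthreading enabled the last figure is typically 50%, which matches the worst case described above.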

user417541

Posted 2017-11-02T11:45:00.510

Reputation:

Wouldn't "proper multithreading" mean having a thread with work to do for every logical core rather than every physical core? If you're running a tight loop on every logical core, Task Manager should report 100% with hyperthreading. AFAIK, the "percent usage" in Task Manager is based on the amount of time the thread in question was in the runnable state and scheduled on a logical core, not necessarily the amount of time it was actually, say, doing something on an ALU. The OS probably wouldn't even know that (only the CPU microcode would.) – reirab – 2017-11-03T23:30:29.810

"Regular" machine code only uses 2 to 3 of the 6 or more instruction ports on modern CPUs. Not to mention all of the pipeline stalls caused by branch and cache misses. Hyperthreading helps fill in those gaps. It's almost always a win to use it. Some types of code don't do well with it, like video encode / decode or heavily optimized matrix math. But those are unusual. – Zan Lynx – 2017-11-04T17:44:12.100