Short answer, as people have written whole books about this topic:
What you are calling cycles is usually expressed as "CPU time": a process is allocated a certain number of 'quantums' (time slices) on the CPU. Keep in mind that on CISC architectures (like x86) a single instruction can take several cycles.
In the simplest round-robin scheduler, 50% of the CPU time would be allocated to each process (if there are two), so each task would take roughly twice as long as it would alone, assuming it only depends on the CPU.
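To make that concrete, here is a minimal round-robin sketch in Python (invented numbers, one simulated CPU, nothing like a real kernel): with two equally demanding tasks sharing one CPU, each finishes in roughly twice the elapsed time it would need alone.

```python
# A minimal round-robin sketch (illustrative only). Two CPU-bound tasks share
# one simulated CPU, one quantum per turn: each sees roughly half the CPU and
# finishes in about twice the elapsed time it would need alone.

from collections import deque

QUANTUM_MS = 10  # hypothetical time slice

def round_robin(tasks):
    """tasks: dict of name -> required CPU time in ms. Returns finish times."""
    queue = deque(tasks.items())
    clock = 0
    finished = {}
    while queue:
        name, remaining = queue.popleft()
        slice_ms = min(QUANTUM_MS, remaining)
        clock += slice_ms                    # the CPU runs this task for one quantum
        remaining -= slice_ms
        if remaining > 0:
            queue.append((name, remaining))  # quantum expired: back of the queue
        else:
            finished[name] = clock           # task completed at this point in time
    return finished

print(round_robin({"A": 100}))             # alone: {'A': 100}
print(round_robin({"A": 100, "B": 100}))   # shared: {'A': 190, 'B': 200}
```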
However, all modern operating systems have more advanced scheduling algorithms, which take one or more of the following parameters into account (a simplified sketch follows the list):
- Priority
- State (blocked for I/O, runnable, stopped, etc.)
- Last time the process ran on the CPU
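As a toy illustration of how those parameters might be weighed (assumed and simplified, not the actual Linux algorithm; the field names are invented for the example), a scheduler could pick the next process like this:

```python
# Toy "pick the next process": among runnable processes, favour higher priority
# and break ties by who has waited longest since last running on the CPU.

from dataclasses import dataclass

@dataclass
class Proc:
    pid: int
    priority: int    # higher number = more important (an assumption of this sketch)
    state: str       # "runnable", "blocked", "stopped", ...
    last_ran: float  # timestamp of the last time it was on the CPU

def pick_next(procs, now):
    runnable = [p for p in procs if p.state == "runnable"]  # blocked/stopped are skipped
    if not runnable:
        return None  # nothing to run: the CPU idles
    # Highest priority first; among equals, the one that has waited the longest.
    return max(runnable, key=lambda p: (p.priority, now - p.last_ran))

procs = [
    Proc(1, priority=5, state="runnable", last_ran=90.0),
    Proc(2, priority=5, state="runnable", last_ran=80.0),  # same priority, waited longer
    Proc(3, priority=9, state="blocked",  last_ran=95.0),  # blocked on I/O: ignored
]
print(pick_next(procs, now=100.0).pid)  # -> 2
```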
Depending on the scheduling algorithm, a process is allowed to run on the CPU until one of the following happens (see the sketch after this list):
- It requests I/O and blocks waiting for it
- A process with a higher priority needs CPU time (pre-emption)
- A hardware interrupt is triggered
- The process's quantum runs out
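Here is the sketch referred to above: a toy run loop in which a process's turn on the CPU ends for one of those four reasons. All classes and event names are invented for illustration; a real kernel is far more involved.

```python
# Toy run loop: run a process for at most one quantum and report why its turn ended.

from dataclasses import dataclass, field

QUANTUM_MS = 10

@dataclass
class ToyProc:
    pid: int
    priority: int
    events: list = field(default_factory=list)  # scripted events, one per simulated ms

    def execute_one_ms(self):
        return self.events.pop(0) if self.events else None

def run_until_event(proc, ready_queue):
    for _ in range(QUANTUM_MS):
        event = proc.execute_one_ms()
        if event == "io_request":
            return "blocked on I/O"     # sleeps until the device answers
        if event == "hardware_interrupt":
            return "interrupted"        # the kernel services the interrupt first
        if any(p.priority > proc.priority for p in ready_queue):
            return "pre-empted"         # a higher-priority process became runnable
    return "quantum expired"            # time slice used up; re-queued

print(run_until_event(ToyProc(1, 5, events=[None, None, "io_request"]), []))
print(run_until_event(ToyProc(2, 5), [ToyProc(3, 9)]))  # pre-empted immediately
print(run_until_event(ToyProc(4, 5), []))               # runs its full quantum
```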
How the final queuing is organized depends on the scheduler. The Linux scheduler is described here.
Thinking about cycles is not really useful in the context of a real-world computer. A single instruction, as stated above, can take multiple cycles on CISC architectures. The CPU also spends cycles switching between processes (saving and restoring registers, updating the MMU, etc.), so cycles are not a good metric for describing CPU usage.
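You can observe the time-based accounting for your own process: Python's standard library distinguishes wall-clock time from the CPU time actually charged to the process. A minimal sketch; the exact numbers vary per machine.

```python
# Wall-clock time vs. CPU time for the current process.

import time

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time charged to this process

total = sum(i * i for i in range(2_000_000))  # some CPU-bound work
time.sleep(1)                                 # blocked: consumes no CPU time

print(f"wall-clock: {time.perf_counter() - wall_start:.2f} s")
print(f"CPU time:   {time.process_time() - cpu_start:.2f} s")  # roughly 1 s less
```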
The useful output of a CPU is mostly determined by how quickly it can handle processing requests. Processes that are ready to run are placed in the run queue until the CPU can handle them. As long as processes don't have to wait more than roughly a quantum (~200 ms) to be run, the system is essentially as fast as it is going to get. In the OS this shows up as a number: on Linux as the load average (a float shown by top and uptime), on Windows as the CPU-usage percentage in Task Manager.
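On Linux the figures shown by top and uptime come from /proc/loadavg; here is a minimal, Linux-specific sketch of reading them:

```python
# /proc/loadavg: the first three fields are the run-queue length averaged over
# 1, 5 and 15 minutes; the fourth is "runnable/total" scheduling entities.

with open("/proc/loadavg") as f:
    fields = f.read().split()

load_1m, load_5m, load_15m = (float(x) for x in fields[:3])
running, total = fields[3].split("/")  # e.g. "2/873"
print(f"load averages: {load_1m} (1m), {load_5m} (5m), {load_15m} (15m)")
print(f"runnable right now: {running} of {total}")
```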
To give a short answer to the question you put in the title: utilization gets high because the CPU is kept busy, i.e. the run queue rarely empties. The load average you see on Linux is an average of the run-queue length over time; the percentage you see in Windows Task Manager (or in top) is the fraction of time the CPU spent doing work rather than sitting idle.
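For the percentage figure, a common approach (and roughly what top does, with more bookkeeping) is to sample the aggregate counters in /proc/stat twice and compute what fraction of the elapsed time was not spent idle. A minimal, Linux-specific sketch:

```python
# Derive a "% CPU utilization" figure from two samples of /proc/stat.

import time

def read_cpu_times():
    with open("/proc/stat") as f:
        # First line: "cpu  user nice system idle iowait irq softirq steal ..."
        values = [int(x) for x in f.readline().split()[1:]]
    idle = values[3] + values[4]  # idle + iowait count as "not working"
    return idle, sum(values)

idle_1, total_1 = read_cpu_times()
time.sleep(1)
idle_2, total_2 = read_cpu_times()

busy_fraction = 1 - (idle_2 - idle_1) / (total_2 - total_1)
print(f"CPU utilization over the last second: {busy_fraction:.0%}")
```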
Thanks mtak. Given your answer that cycles are usually expressed as "CPU time", CPU cycles and CPU utilization are different terms. Can you shed some light on what CPU utilization is, which is the intent of this question? – user3198603 – 2016-08-24T16:28:58.940
I updated the answer, but you're not really asking a specific question. I am presuming you're not an expert on computer architecture (excuse me if that's not the case), so I believe you're asking a question that is not really relevant. I gave all the extra info so you might get more insight into, and a better understanding of, how an OS schedules processes (and how that relates to the actual performance/metrics you can view in the OS). – mtak – 2016-08-24T17:58:27.653
Hi mtak, I read a bit more on the net and now get the concept of when CPU utilization gets high. Say I see Task Manager display a CPU utilization of 70%: it means that in the last x seconds the CPU was working 70% of the time, i.e. only 30% free in the last x seconds. I get it now. – user3198603 – 2016-08-25T05:28:20.560