What is a CPU tick?

39

12

Question:

  • How is a CPU tick calculated and what does it represent?
  • Does a single tick equate to 10 milliseconds? If so, and a thread reports it was not scheduled for 500 ticks (500 ticks × 10 ms = 5 seconds), does this mean the CPU was perhaps too busy to schedule the aforementioned thread to work?

Aaron

Posted 2010-01-27T11:21:05.537

Reputation: 1 228

1 – Could you put "CPU tick" in context – perhaps cut and paste the paragraph from the source of the phrase. I am concerned that there could be confusion between one of several possible answers. – Mick – 2010-01-28T09:44:19.507

Answers

34

A tick is an arbitrary unit for measuring internal system time. There is usually an OS-internal counter for ticks; the current time and date used by various functions of the OS are derived from that counter.

How many milliseconds a tick represents depends on the OS, and may even vary between installations. Use the OS's mechanisms to convert ticks into seconds.
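On POSIX systems that conversion factor is exposed via `sysconf(_SC_CLK_TCK)`. A minimal Python sketch (the function name `ticks_to_seconds` is just for illustration):

```python
import os

# Ticks per second as reported to userspace; on most Linux systems this
# is 100, but query it rather than hard-coding a value.
ticks_per_second = os.sysconf("SC_CLK_TCK")

def ticks_to_seconds(ticks):
    """Convert a tick count (e.g. from /proc/<pid>/stat) to seconds."""
    return ticks / ticks_per_second
```

The same value is available from the shell as `getconf CLK_TCK`.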

As to why a thread reports it's not being called: That will depend on whether the thread is blocking somewhere (waiting, I/O etc.). If it is not blocking, then yes, the OS's scheduler will decide when it gets to run, which may be a long time if the system is busy.

Edit:

Note that, perhaps unfortunately, some authors also use tick as a synonym for processor clock cycle (e.g. this text). I believe this usage is less widespread, but still, best to find out first what people are talking about.

sleske

Posted 2010-01-27T11:21:05.537

Reputation: 19 887

So the CPU requires a fixed number of clock ticks to execute each instruction? – Aaron – 2010-01-27T13:12:44.717

1 – @Aaron: No, instruction execution time is bound to a certain number of processor cycles. Given a specific OS on a specific CPU running at a specific frequency, you can calculate how many ticks a specific instruction takes to execute, but that calculation isn't necessarily valid for any other combination of OS/CPU/frequency/instruction. – quack quixote – 2010-01-27T13:41:21.220

1 – @Aaron: No, ticks and processor cycles are two different concepts. Processor cycle length is determined by the hardware (CPU frequency); ticks are produced by the OS and use whatever length the OS (or its designers) deem appropriate. – sleske – 2010-01-28T09:03:39.020

Don't use datetime.ticks() as a gauge, it is deliberately constant by design, to represent the date and time of that object instance. It's time-dependent, whereas hardware ticks are time-independent. It's bad naming on the framework's part. – invert – 2010-01-28T09:10:17.913

1 – @sleske: Hmm, I see the difference. 'Ticks' used to be a synonym for CPU cycles, as well as a term for "a constant amount of time independent of CPU clock speed". Same term with two meanings. Is that right? – invert – 2010-01-28T09:14:23.490

@KeyboardMonkey: Yes. – sleske – 2010-01-28T12:10:26.363

@quackquixote That is not even true anymore. In a pipelined architecture, I can sometimes do multiple things in a CPU cycle, and at other times only one, perhaps because the pipeline was filled with useless predictions that are wasted because the code took an unexpected direction, etc. You would have a hard time correlating CPU cycles to real time even for the same system running different code. – jobermark – 2015-01-16T17:39:04.977

On Linux, actual implementations use a fixed value of 100 for the number of clock ticks per second.

See: https://github.com/prometheus/procfs/blob/master/proc_stat.go#L25-L40

– jhvaras – 2019-05-03T16:51:15.570
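Building on that comment, a process's consumed CPU time can be read in ticks from /proc/&lt;pid&gt;/stat and converted with the same `sysconf` value. A Linux-only sketch (field positions per proc(5); the naive `split()` assumes the process name contains no spaces):

```python
import os

def process_cpu_ticks(pid="self"):
    """Read utime and stime (in clock ticks) from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    # Fields 14 and 15 (1-based) are utime and stime, counted in ticks.
    return int(fields[13]), int(fields[14])

hz = os.sysconf("SC_CLK_TCK")  # ticks per second; typically 100 on Linux
utime, stime = process_cpu_ticks()
print(f"CPU time used so far: {(utime + stime) / hz:.2f} s")
```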

3

Edit: Taken from PC Hardware in a Nutshell:

"The processor clock coordinates all CPU and memory operations by periodically generating a time reference signal called a clock cycle or tick. Clock frequency is specified in gigahertz (GHz), which specifies billions of ticks per second. Clock speed determines how fast instructions execute. Some instructions require one tick, others multiple ticks, and some processors execute multiple instructions during one tick."


The time between ticks is determined by your clock speed, and an instruction takes anywhere from one to many ticks depending on the operation being performed. For example, a 286-class CPU needs 20 ticks to multiply two numbers.

If you need high performance timers, then I don't think you can rely on ticks being constant across all systems.
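For portable timing it is safer to use a monotonic, high-resolution clock with a defined unit (seconds) rather than raw tick counts. A quick illustration in Python (the workload is arbitrary):

```python
import time

# time.perf_counter() is a monotonic, high-resolution timer measured in
# seconds, regardless of the OS tick length or the CPU frequency.
start = time.perf_counter()
total = sum(range(1_000_000))   # arbitrary workload to time
elapsed = time.perf_counter() - start
print(f"workload took {elapsed * 1000:.3f} ms")
```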

The CPU scheduler could have delayed the thread, especially if there was another thread with a higher priority. So yes, the CPU could've been too busy.

invert

Posted 2010-01-27T11:21:05.537

Reputation: 4 918

4 – -1: You are confusing ticks and processor cycles. Ticks on a Unix system usually occur 60 or 100 times per second, and are not bound to processor speed. – sleske – 2010-01-27T12:33:30.313

1 – I'm talking about hardware ticks. A 1 Hz CPU processes 1 tick (cycle) per second, a 200 Hz CPU can process 200 cycles per second, and a 2 GHz CPU two billion cycles per second. The faster your CPU, the more cycles per second you get. – invert – 2010-01-28T08:54:59.430

2 – Ah, I see, some authors use tick as a synonym for processor cycle. Still, I mostly see tick used as explained in my answer above. But I guess terminology varies, as usual :-(. I edited my answer. – sleske – 2010-01-28T09:08:10.243