How does the CPU cache fit in this?

1

So I understand a little bit about how it's all wired together. Instructions are stored in Registers and the EIP/RIP points to Opcode that manipulates those registers (program code).

However, I can't understand the relationship between the large cache and the finite number of registers that are available.

Nocturnal

Posted 2014-01-10T13:06:52.950

Reputation: 113

Answers

3

Well... when you request something from main memory, the CPU looks in the cache first. If it finds it there (a cache hit), access is nice and fast. If not (a cache miss), it has to go out to the slower RAM. It might then add the data to the cache, evicting something else in the process, to speed up future accesses.

Modern CPUs can have per-core caches, caches shared between a couple of cores, and caches shared by the whole CPU. Usually, the smaller, more specific caches are much faster, but they are expensive and/or take up too much space, so you end up with multiple levels of cache, each level bigger and slower than the last, and then the very big and very slow main RAM. The registers are the fastest of all, but they hold only a tiny amount of data, so you can't store much in them - hence the need for the caches and RAM.

That's a high-level view. Maybe not low enough judging by the rest of your question.
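
As a toy illustration of the hit/miss idea - not how real hardware is built, just a sketch with made-up sizes and names - a direct-mapped cache lookup can be modelled roughly like this in C:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define LINE_SIZE 64          /* bytes per cache line (a typical value) */
    #define NUM_LINES 1024        /* made-up cache size: 64 KiB in total    */

    struct cache_line {
        bool     valid;
        uint64_t tag;
        uint8_t  data[LINE_SIZE];
    };

    static struct cache_line cache[NUM_LINES];
    static uint8_t ram[1 << 20];  /* stand-in for the slow main memory */

    /* Read one byte "through" the toy cache. */
    uint8_t read_byte(uint64_t addr)
    {
        uint64_t index = (addr / LINE_SIZE) % NUM_LINES;  /* which slot       */
        uint64_t tag   = (addr / LINE_SIZE) / NUM_LINES;  /* identifies block */
        struct cache_line *line = &cache[index];

        if (!line->valid || line->tag != tag) {
            /* Cache miss: fetch the whole line from slow RAM, evicting
             * whatever happened to be sitting in this slot before.      */
            memcpy(line->data, &ram[addr - (addr % LINE_SIZE)], LINE_SIZE);
            line->tag   = tag;
            line->valid = true;
        }
        /* Cache hit (or freshly filled line): the fast path. */
        return line->data[addr % LINE_SIZE];
    }

Real caches do all of this in hardware, in parallel, and are usually set-associative rather than one slot per index, but the hit/miss/evict logic is the same basic idea.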

Bob

Posted 2014-01-10T13:06:52.950

Reputation: 51 526

2

So I understand a little bit about how it's all wired together. Instructions are stored in Registers

No, instructions are stored in memory.

and the EIP/RIP points to Opcode that manipulates those registers (program code)

EIP/RIP points to the memory location from which the CPU will fetch the next instruction. It advances by the length of the instruction (one or more bytes) as the instruction is retrieved, and it may change entirely upon a branch, jump, or interrupt.

The opcode is the part of the instruction that actually tells the CPU what to do. Many instructions (not all) consist of an opcode (again, the actual instruction or "command") and data needed by the opcode (an "operand").

Some opcodes manipulate registers directly (e.g. MOV AX, immediate), many manipulate registers indirectly as a side effect (most arithmetic instructions update FLAGS depending on the result), and a few (not very many) do not manipulate any registers at all aside from the instruction pointer.
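
As a concrete (and hedged) illustration, the bytes below are the usual 32-bit encodings of two instructions, shown simply as data in a C array; the opcode and its operand sit next to each other in memory, and EIP advances by the total length of each instruction:

    /* A sketch of two x86 instructions as they sit in memory. */
    unsigned char program[] = {
        0xB8, 0x05, 0x00, 0x00, 0x00,  /* mov eax, 5   - opcode B8 followed by a
                                          4-byte immediate operand; 5 bytes long,
                                          so EIP advances by 5                    */
        0x01, 0xD8                     /* add eax, ebx - opcode 01 plus a ModRM
                                          byte naming the registers; 2 bytes long,
                                          and it also updates FLAGS as a side
                                          effect of the addition                  */
    };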

Cache is a transparent layer between the CPU and RAM. When executing an instruction that reads memory, the CPU checks the cache first, since it is much faster, and it tries to keep the cache filled with often-used data. In an assembly language program you don't have to use special instructions for this to happen.
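
For example (a sketch with illustrative sizes), the C code below contains nothing cache-specific, yet the first loop is typically much faster than the second simply because it walks memory in the order the cache fetches it, one line at a time:

    #include <stddef.h>

    #define N 1024
    static int matrix[N][N];

    /* Row-major walk: consecutive addresses, so each cache line that
     * gets fetched is used completely before moving on - mostly hits. */
    long sum_rows(void)
    {
        long sum = 0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                sum += matrix[i][j];
        return sum;
    }

    /* Column-major walk: each access jumps N*sizeof(int) bytes, landing
     * on a different cache line almost every time - many more misses.  */
    long sum_cols(void)
    {
        long sum = 0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                sum += matrix[i][j];
        return sum;
    }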

Register accesses don't have anything to do with the cache. Registers may be involved in computing which memory location to access - i.e. registers can be used as pointers and indexes ("indexed" or "indirect" addressing) - but addresses can also be specified directly in assembly language instructions (the "absolute" addressing mode). Modern x86 CPUs also have a transparent "register renaming" feature that supports out-of-order execution, but it is not linked to the on-chip cache.
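
As a rough illustration of those addressing modes - the assembly in the comments is the sort of thing a compiler typically emits for x86-64, and the exact registers depend on the compiler and calling convention:

    static int table[16];

    int absolute_load(void)
    {
        /* The address of table[3] is known, so absolute/direct
         * addressing (or a RIP-relative form) can be used, roughly:
         *     mov eax, dword ptr [table + 12]                        */
        return table[3];
    }

    int indexed_load(int i)
    {
        /* Here a register holds the index, so indexed (register-
         * indirect) addressing is used, roughly:
         *     mov eax, dword ptr [table + rdi*4]                     */
        return table[i];
    }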

In x86 there are a few instructions that flush or invalidate the cache, and some CPUs let you configure the cache as "cache-as-RAM", where the CPU uses only the cache and doesn't go to RAM at all. This is useful when code needs to run before RAM is initialized, such as when system firmware or an OS starts, or without disturbing RAM, such as in an OS crash handler.
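
For instance, a minimal sketch of an explicit flush from C - CLFLUSH requires SSE2, and the heavier WBINVD instruction is privileged, so it is only usable from kernel or firmware code:

    #include <emmintrin.h>   /* _mm_clflush (SSE2) */

    static int shared_value;

    void publish_shared_value(void)
    {
        shared_value = 42;
        /* Write the cache line holding shared_value back to RAM and
         * drop it from the cache. Ordinary programs never need this;
         * it matters for things like non-coherent DMA buffers.       */
        _mm_clflush(&shared_value);
    }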

LawrenceC

Posted 2014-01-10T13:06:52.950

Reputation: 63 487