Can we run linux in something faster than RAM?

21

3

This is perhaps a silly question, and may be the result of a misunderstanding. I'm studying CPUs right now, and memory in particular. I was just reading about how much faster SRAM is than DRAM, and also how much more expensive. SRAM is very expensive: I shopped around for a bit and found a battery-powered SRAM card with 16 MB for around $400.

Recently a friend mentioned he has been running puppy linux in RAM, and that it is fast. I noticed, though, that tiny core linux can be even smaller... as small as 8 MB! This got me thinking: can we run linux in SRAM? Is that question even well-formed?

Googling this question proved ineffective, but it raised yet more questions. Could one run linux in L3 Cache? Intel Core i7 can have an L3 Cache big enough to fit the 8MB... but am I making a categorical error? What is the difference between this and 'embedded' linux?

That's the question: can we run linux in SRAM or L3 Cache? Is there anything faster? How fast can we linux!?

z.

Ziggy

Posted 2013-03-25T00:08:42.777

Reputation: 826

3Embedded linuxes are often run on ram, or non volatile memory. Embedded linuxes are often just stripped down to run on specific hardware only, or use some less common kernel options like low latency – Journeyman Geek – 2013-03-25T01:35:31.457

2I wonder if there is any practical use for this question? – Robert Niestroj – 2013-03-25T12:15:39.920

4+1 for using "linux" as a verb (in the last sentence)! – Vorac – 2013-03-27T10:03:46.017

Answers

20

Linux, or any other OS, does not know how the RAM works. As long as the memory controller is properly configured (e.g. refresh rates set for DRAM), the OS does not care whether it runs on plain dynamic memory (plain RAM), fast page mode RAM (FP RAM, from the C64-ish times), extended data out RAM (EDO), synchronous DRAM (SDRAM), any of the double data rate SDRAMs (DDR 1/2/3), whatever.

All of those support reading and writing from random places. All will work.

Now cache is a bit different. You do not have to write to it for the contents to change. That will get in the way. Still, it is somewhat usable. I know that coreboot uses the cache as a sort of memory during boot, before the memory controller is properly configured. (For the details, check out the videos from the coreboot talks during FOSDEM 2011).

So in theory yes, you could use it.

BUT: For practical tasks, a system with 1 GB of 'regular' 'medium speed' memory will perform a lot better than one with only a few MB of super fast memory. Which means you have two choices:

  1. Build things the normal 'cheap' way. If you need more speed, add a few dozen extra computers (all with 'slow' memory).
  2. Or build a single computer at a dozen times the price and significantly less than a dozen times the performance.

Except in very rare cases the second is not sensible.

Hennes

Posted 2013-03-25T00:08:42.777

Reputation: 60 739

6Many CPUs support a "cache-as-RAM" mode through the CPU's model specific registers (MSRs). Also note that SRAM consumes more power than DRAM, and that is also a design factor. If the CPU's cache was big enough or the kernel small enough, you could enable this cache-as-RAM mode and keep it executing entirely in SRAM on the CPU. You would have a limited amount of RAM to run programs, etc., though, because AFAIK cache-as-RAM and normal mode will not work simultaneously. I could be wrong about that though. Even if it did, most of a CPU's speed these days is due to the use of the L2 and L3 caches. – LawrenceC – 2013-03-25T02:38:36.740

@Hennes is it that Linux only cares about (mapped) memory addresses? – Alvin Wong – 2013-03-25T07:02:39.863

SDRAM is Synchronous D(ynamic) RAM, whereas SRAM is Static RAM. I don't know which one you meant to refer to in the first paragraph and I don't have the rep to make "trivial" edits, but maybe you could fix that? Other than that, good answer. – a CVn – 2013-03-25T08:40:23.107

I do not mind clarifying, but I am not sure what you want to have clarified. Can you add that in a comment and I will edit it. – Hennes – 2013-03-25T17:44:15.837

When I first read this comment I saw, "Linux, or any other OS dies not knowing how the RAM works."

Your breakdown is a good one: I think I had no illusions that this would be "better". I just wondered whether it could be done. – Ziggy – 2013-03-25T19:15:35.810

8

Yes, you can, and this is in fact how it's already done, automatically. The most frequently used parts of RAM are copied into cache. If your total RAM usage is smaller than your cache size (as you suppose), the existing caching mechanism will already have copied everything into cache.

The only time when the cache would then be copied back to normal RAM is when the PC goes to S3 sleep mode. This is necessary because the caches are powered down in S3 mode.

MSalters

Posted 2013-03-25T00:08:42.777

Reputation: 7 587

1Not all of it can/will be copied. For the Intel/x86 cache structure: if I have a 256 KiB direct-mapped cache and 1024 KiB of memory, I can read address 0 and it will be stored in the cache at location 0. I can then read address 1, and it will be stored in the cache at location 1. However, if I read the address at (256KiB+1), that will also be stored at location 1 in the cache. The cache uses an extra (tag) SRAM to indicate which of the two is stored. This means that reading at strides of the cache size will not work well. (Note that this would be a rare thing and can usually be ignored). – Hennes – 2013-03-25T17:37:14.390

This is insightful! Why would I clumsily stuff what I think is important into the L3 Cache when I could let an army of geniuses determine the optimal thing to do, and program a CPU to do that optimal thing. Right? – Ziggy – 2013-03-25T19:12:40.317

3

Many CPUs allow the Cache to be used as RAM. For example, most newer x86 CPUs can configure certain regions as writeback with no-fill on reads via MTRRs. This can be used to designate a region of the address space as - effectively - cache-as-ram.

Whether this would be beneficial is another question - it would lock the kernel into RAM, but at the same time would reduce the effective size of the cache. There might also be side effects (such as having to disable caching for the rest of the system) that would make this far slower.

Marc Lehmann

Posted 2013-03-25T00:08:42.777

Reputation: 31

2

"can we run linux in L3 Cache?"

No, this is not possible, because cache memory is not directly/linearly addressable.
Due to the way cache memory is designed, the CPU's program counter (IP register) cannot point to a location in the cache.

A CPU cache has its own "associativity", and this associativity defines the way "normal" memory is mapped onto cache memory. This feature of the cache is one of the reasons cache memories are so fast.

Max

Posted 2013-03-25T00:08:42.777

Reputation: 988

1

"can we run linux in L3 Cache?"

No. Cache is there for the specific job of holding program data and instructions ready for when the processor is going to need them. You'll find the operating system in the cache anyway, because it's constantly being used. Loading all of the OS into cache isn't efficient, since you are not using every code path in the kernel at once.

"can we run linux in SRAM?"

Certainly you could use battery-backed SRAM as your boot partition; you could then use the embedded-world technique of execute in place (XIP). That might lead to faster boot times and slightly faster operation. However, a major factor is the bandwidth between the L3 cache and wherever the kernel is (a boot drive or RAM).
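Execute in place is a real kernel feature on some architectures (ARM, for example). A hedged sketch of the relevant kernel configuration, assuming a board whose kernel image lives in directly addressable flash or SRAM; the physical address is a placeholder, not a real board value:

```
# Kernel configuration sketch (ARM): run the kernel image in place
# from directly addressable memory instead of copying it into DRAM.
# CONFIG_XIP_PHYS_ADDR below is a placeholder for your board.
CONFIG_XIP_KERNEL=y
CONFIG_XIP_PHYS_ADDR=0x00080000
```

The trade-off is the one this answer hints at: XIP saves the copy into RAM and frees DRAM, but every instruction fetch then runs at the speed of the storage device rather than of main memory.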

"Is there anything faster? How fast can we linux!?"

Generally, hardware manufacturers and operating system developers are working to make processing as fast as possible. However, your question is very general: do you want to speed up boot times, optimize file system access, speed up computations, or something else? Once you have a more specific question, you can certainly start to find the bottleneck and remove it. Your SRAM drive would certainly speed up your boot process. Getting to a GUI in 3 seconds would be very cool to see.

Phil Hannent

Posted 2013-03-25T00:08:42.777

Reputation: 1 080

1

Back in the days of 486es there used to be machines where all of the RAM was SRAM. This is back when 8MB was a lot, but seems to match your constraints. I'm sure 8MB of SRAM is much cheaper now than back then.

So, you could run Linux in SRAM if the machine was made that way. It's not a theoretical; it's been done.

But not in cache. Cache is wired differently and, more importantly, addressed differently. You can't address it the same way: chunks are mapped in differently, not as one continuous block. And the contents aren't necessarily what you see on disk: newer Intel chips do a sort of just-in-time "compiling" (more of a CISC-to-RISC micro-op re-encoding) where the micro-ops are what end up in the cache. In short, what's in cache isn't your program but a transformed view of it, so you couldn't use it as the in-memory representation of your program any more.

The question is why. Other than "because I can", there's not a lot of reason for this. The cache system gets you most of the speed benefit at a lot less of the cost. And remember, cost isn't just dollars: SRAM takes more transistors, which means more electricity.

Rich Homolka

Posted 2013-03-25T00:08:42.777

Reputation: 27 121