Why does RAM have to be volatile?

90

26

If computer RAM were non-volatile like other persistent storage, there would be no such thing as boot-up time. So why is it not feasible to have a non-volatile RAM module? Thank you.

Chintan Trivedi

Posted 2013-08-30T10:03:53.463

Reputation: 833

With recent advances, NVRAMs such as PCRAM, STT-RAM and ReRAM hold the promise of replacing DRAM, as they offer high performance, low energy consumption and higher endurance than Flash. So, RAM doesn't have to be volatile. See my survey paper.

– user984260 – 2015-08-04T19:51:05.840

Samsung M.2 SSDs offer 2.5 GB/s read and 1.5 GB/s write. It may not be the answer to the question, but it shows how close we are getting to RAM speed. – cybernard – 2017-01-25T04:14:25.087

Now you have non-volatile RAM – phuclv – 2018-06-30T09:26:54.283

2This question deserves a full answer, but I think non-volatile memory is much slower. – mveroone – 2013-08-30T10:06:58.557

1

I know electrons leak out of RAM and charge needs to be periodically restored. This is what the refresh rate refers to. http://www.computerhope.com/jargon/r/refresh.htm

– Celeritas – 2013-08-30T10:10:40.167

12What made you think it does have to be volatile?? It wasn't 40 years ago. – Daniel R Hicks – 2013-08-30T11:30:27.773

20RAM is volatile not because it has to be volatile; it's because the technology it uses is volatile. – Alvin Wong – 2013-08-30T13:32:52.393

1

I'll just leave this here....http://en.wikipedia.org/wiki/Resistive_random-access_memory

– MirroredFate – 2013-08-30T15:50:14.410

@alvin of course that just brings up the follow up question "why does RAM use a volatile technology?" – jhocking – 2013-08-30T17:03:05.320

@jhocking -- Obviously because of Moore's law. Nothing stands still. – Daniel R Hicks – 2013-08-30T17:15:45.540

Because a non-volatile RAM would be called SSD. – Lie Ryan – 2013-08-30T19:36:04.713

8@jhocking because no non-volatile technology of comparable performance is available. – Dan is Fiddling by Firelight – 2013-08-30T20:33:02.363

2Assume I ask this 2 years from now: Why can't you remember what the last flavor of soda that you drank was before you asked this question? – Erik Reppen – 2013-08-31T02:51:54.160

On second thought... How is RAM useful if it ISN'T volatile? – Erik Reppen – 2013-08-31T02:59:43.720

2@ErikReppen - "Volatile" (in the sense used with RAMs) is different from "writable". "Volatile" means that the data will go "poof" after a (relatively brief) period of time if nothing is done to preserve it. Of course, all memory media are "volatile", but some over a period of decades while others in less than a second. – Daniel R Hicks – 2013-08-31T11:52:09.567

Answers

114

When most people read or hear "RAM", they think of these things:

Two SDRAM sticks, courtesy of Wikipedia

Actually these are made of DRAM chips, and it's debatable whether DRAM is a kind of RAM. (It used to be "real" RAM, but technology has changed, and whether it still counts as RAM is more of a religious question; see the discussion in the comments.)

RAM is a broad term. It stands for "random access memory", that is, any kind of memory that can be accessed in any order (where by "accessed" I mean read or written, though some kinds of RAM may be read-only).

For example, an HDD isn't random access memory, because when you try to read two bits that aren't adjacent (or you're reading them in reverse order for whatever reason) you have to wait for the platters to rotate and the head to move. Only sequential bits can be read without additional operations in between. That's also why DRAM can be considered non-RAM: it's read in blocks.
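To see the seek penalty described above, here is a minimal timing sketch (my own illustration, not from the answer; the file path, block size and counts are hypothetical, and on an SSD or with a warm OS page cache the gap largely disappears):

```python
# Compare sequential vs. random reads of the same number of blocks.
# On a spinning disk the random pattern is dominated by seek and
# rotational latency -- exactly the "non-random-access" behaviour above.
import os
import random
import time

PATH = "testfile.bin"   # hypothetical multi-GB file sitting on an HDD
BLOCK = 4096            # bytes per read
COUNT = 2000            # blocks to read in each pattern

size = os.path.getsize(PATH)
sequential = [i * BLOCK for i in range(COUNT)]
scattered = [random.randrange(0, size - BLOCK) for _ in range(COUNT)]

def timed_read(offsets):
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:   # unbuffered, to hit the device
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

print("sequential:", timed_read(sequential), "s")
print("random:    ", timed_read(scattered), "s")
```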

There are many kinds of random access memory. Some of them aren't volatile and there are even read-only ones too, for example ROM. So non-volatile RAM exists.

Why don't we use it? Speed isn't the biggest problem: NOR Flash memory, for example, can be read about as fast as DRAM (at least that's what Wikipedia says, though without citation). Write speeds are worse, but the most important issue is:

Because of the inner architecture of non-volatile memory, it wears out. The number of write-and-erase cycles is limited to roughly 100,000-1,000,000. That looks like a large number, and it's usually sufficient for non-volatile storage (pendrives don't break that often, right?), but it's an issue that already had to be addressed in SSDs. RAM is written far more often than an SSD, so it would be much more prone to wearing out.
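A quick back-of-envelope sketch (my numbers, not the answer's; the write rate and hot-cell update rate are assumptions) shows why RAM-like write traffic is so much harsher than SSD traffic:

```python
# How long would a flash-like memory survive RAM-style write traffic?
ENDURANCE = 100_000            # P/E cycles per cell (low end of the range above)
CAPACITY = 8 * 2**30           # an 8 GiB module
WRITE_RATE = 10 * 10**9        # assumed 10 GB/s of sustained writes

# Best case: perfect wear-leveling spreads every write over all cells.
total_writable_bytes = CAPACITY * ENDURANCE
print(f"ideal wear-leveling: ~{total_writable_bytes / WRITE_RATE / 3600:.0f} hours")

# Worst case: one hot 64-byte location (a counter, a lock word) is
# rewritten constantly and there is no wear-leveling at all.
HOT_WRITES_PER_SEC = 10**7     # assumed update rate for a hot cache line
print(f"hot cell, no leveling: ~{ENDURANCE / HOT_WRITES_PER_SEC:.2f} seconds")
```

Even in the ideal case the module is worn out in roughly a day; without wear-leveling a hot location dies almost instantly.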

DRAM doesn't wear out, it's fast and relatively cheap. SRAM is even faster, but it's also more expensive; right now it's used in CPUs for caching (and it's truly RAM, without any doubt ;) ).

gronostaj

Posted 2013-08-30T10:03:53.463

Reputation: 33 047

34+1 for being among the 0.1% of people rightly stating ROM is also RAM! (stating DRAM is not RAM is a little extreme though ...) – jlliagre – 2013-08-30T11:04:20.343

11But the original disk drives were referred to as "RAM" (since the other alternative was tape). If history determines precedence, DASD (what you young'ins refer to as HDD) is definitely RAM. – Daniel R Hicks – 2013-08-30T11:33:06.160

18@DanielRHicks That's interesting. Maybe "RAMiness" isn't binary: DRAM is less random than SRAM, HDDs are less random than DRAM and so on. – gronostaj – 2013-08-30T11:44:36.343

2The introduction of this answer might be somewhat extreme, but justifiable. As the opposite of 'sequential access', RAM has always been a misnomer; there is no randomness involved. I'd argue all commonly accepted definitions are equally wrong, but using them interchangeably would only add to the confusion. This answer clearly announces what definition it subscribes to. I think this is necessary more than it is pedantic. – Marcks Thomas – 2013-08-30T13:59:06.763

5This is not pedantry, this is nonsense. He explains that DRAM is not RAM because HDDs are not. It is not justifiable, it is nonsense. Also, concentrating exclusively on SSD wear-out, he neglects the first important aspect of RAM - the fast write capability. SSDs write in blocks and very slowly. That is why they suck in the first place. Leave the wear-out alone. BTW, SSD is really block access. This answer puts everything upside down and in confusion for uncertain reasons. This answer is the opposite of pedantry, because pedantry = order. – Val – 2013-08-30T14:22:20.693

11If you call "random access" any memory where accessing a random spot takes only O(1) time regardless of size or current state, then DRAM is random access; an HDD has access in O(#tracks + rotation_time), which varies with size. – ratchet freak – 2013-08-30T14:45:32.097

@Val Maybe I haven't stated my point clearly, my thought process may be chaotic and English isn't my first language. I said that HDDs are an example of non-RAM and I've used them to explain why. Then I stated that the same reasoning applies to DRAM, no mixing between those two. Now, if wearing is problematic with SSDs, then it's much more serious with RAM (if P/E limit can be hit in basic usage, it will fail in more challenging conditions). I could stand slower write speeds, but not replacing memory every few months, so IMO it's more important. – gronostaj – 2013-08-30T14:50:34.583

Common DRAM has been grouped in "blocks"/"pages" at least since Fast Page Mode DRAM, which dates back to 1992 or so. So he does have a real point. In modern memory the relative speeds differ even more, as most pages are powered down when not recently accessed. – MSalters – 2013-08-30T15:13:13.450

You used "obviously" in the sense "it is hard to explain why". It is a bad explanation. I did some dram access and I do not remember that it is serial. You may consider it parallel, if consider the internal workings. But it is not the same as serial, what you claim anyway. When you make strong statement, using "it is the same" is not enough especially because it is not the same at all. – Val – 2013-08-30T15:16:43.797

Also, from the user's point of view, the disk is exposed as a block-access device whereas RAM is truly random access. It satisfies every definition of randomness: you address a specific byte, you do not care about the neighbors, and access is immediate - every clock cycle a new memory cell is accessed. This is nothing like a disk. – Val – 2013-08-30T15:31:48.590

2@jlliagre: His definition of RAM "any kind of memory that can be read or written in any order" definitely does not include ROM. ROM cannot be written in arbitrary order. It can't be written at all (some varieties can be programmed, but that's very different from a memory write). – Ben Voigt – 2013-08-30T15:35:05.217

8

"RAM" was I believe (I can't find a good reference) derived in opposition to sequential memory (magnetic or paper tape; mercury delay lines) which could only be accessed in order. Meanwhile, I found a digression on terms for "RAM" in other languages: http://www.smo.uhi.ac.uk/~oduibhin/tearmai/etymology.htm which emphasise different aspects of the RAM/ROM difference.

– pjc50 – 2013-08-30T15:35:06.500

@pjc50 Yes, user perspective is important. That is why saying that "RAM is the same as serial access" is not acceptable and makes no sense. – Val – 2013-08-30T15:37:39.313

5The part about HDDs not being RAM is interesting. On the one hand, what are you nuts‽ of course they are!, after all, CDs/DVDs and HDDs are clearly RAM because unlike tape, you don’t have to wait for it to go through everything in-between to get to the part you want. On the other, *um, yes, you actually do* have to go through everything in-between (albeit much faster) because as you said, the head/laser has to seek (unless the files are contiguous of course). So it’s amusing (and frustrating) that industry terms (including old, well–worn-in ones) can still be ambiguous and inconsistent. – Synetech – 2013-08-30T16:12:00.827

@pjc50 -- You are quite correct. "random access" is as opposed to sequential access such as tapes. See the RAMAC for an early (but not the earliest) reference.

– Daniel R Hicks – 2013-08-30T17:19:52.867

1@Val It depends on where the user sits. From the POV of a program, a file (disk) can be randomly accessed - that is not the same as the POV of the OS. On the other hand, from the POV of the memory controller, RAM is read in bursts, and usually multiple bursts are read for efficiency (I believe it has something to do with parallelism of addressing in DRAM, but I may be wrong). Given that DRAM can be used from an FPGA, there the memory controller can be 'a user'. – Maciej Piechotka – 2013-08-30T17:33:11.223

3Not all non volatile memories have to wear out; see the ancient core memory, and IBM has been saying for a few years now that they are developing a modern version of that. The correct answer is that they don't have to be, just current cost-effective technology is because it is based on capacitors, which leak, but can be made very small/dense. Also DRAM is not read in blocks. You have to open a row before you can read or write it, but once opened, you can read or write a single byte then close it. – psusi – 2013-08-31T00:14:08.983

SSDs have no moving parts - why should they wear out more than "volatile" memory? – Vector – 2013-08-31T07:57:13.860

2@Vector - Some non-volatile memory designs undergo an actual physical change when a location is written or erased -- effectively melting and refreezing a little "blob" of something in a different configuration. With each change the "blob" gets a little more disorganized, until it no longer can be reliably switched back and forth. – Daniel R Hicks – 2013-08-31T11:46:26.793

um.. and where would I buy SRAM to put into my desktop? Can't find anything – Gizmo – 2013-08-31T15:14:21.317

@Gizmo, you don't... back in the 8086 days you could, and in the 386 days you could install it in special slots on some motherboards to be used as high-speed cache for the slower and larger DRAM, but these days it's only found in the cache built into the CPU. DRAM densities are just so much higher (and prices so much lower) that nobody uses SRAM any more. – psusi – 2013-08-31T18:55:16.647

@Vector, flash takes high voltage to change its state. High enough that it damages it a little bit each time you do, so eventually it burns out. – psusi – 2013-08-31T18:56:21.980

1mhm well would be cool to have 8GB of SRAM instead of normal DDR RAM for memory – Gizmo – 2013-08-31T19:00:50.627

In a typical 1980's computer, a DRAM read cycle would have each chip copy an entire row of bits to a buffer (erasing it from within the main store in the process), read a bit from the buffer, and then rewrite the whole row from the buffer. A write cycle would copy the entire row to the buffer, change a bit, and write it back. The fact that every read or write required the entire row to be read or written wasn't relevant to the circuitry that was using the chip. Later computers exploited the fact that multiple bits within a row could be read or written more efficiently than... – supercat – 2013-09-01T20:33:07.133

...could be bits on different rows; as computers started exploiting this, manufacturers increased their focus on improving the efficiency of larger transfers on a single row. Although the common usage mode shifted from manipulating a single-bit per row per operation (a 16-bit write would do one bit on each of 16 independent rows), it wouldn't make sense to stop referring to a chip as "random access" just because the speed of fully-random accesses didn't improve as much as the speed of same-page accesses. – supercat – 2013-09-01T20:38:38.723

2

@Gizmo note that SRAM capacity per unit area is much less: http://www.digikey.com/catalog/en/partgroup/ddr-ii-xtreme-sram/34225 , so to get 8Gb you'd need several square feet of chips, at which point the wiring latency kills the speed advantage.

– pjc50 – 2013-09-02T13:34:27.960

1finally some sources! Thank you @pjc50 – Gizmo – 2013-09-02T15:11:44.057

That's a source for chips only, not suitable for fitting in your PC. – pjc50 – 2013-09-02T15:16:50.337

Very interesting and well described answer. Shame it does not answer the question. No mention of NVRAM! – geezanansa – 2013-09-04T09:21:26.643

144

Deep down it's due to physics.

Any non-volatile memory must store its bits in two states which have a large energy barrier between them, or else the smallest influence would change the bit. But when writing to that memory, we must actively overcome that energy barrier.

Designers have quite some freedom in setting those energy barriers. Set it low (0 . 1 - only a shallow bump between the two states) and you get memory which can be rewritten a lot without generating much heat: fast but volatile. Set the energy barrier high (0 | 1 - a tall wall between the states) and the bits will stay put almost forever, or until you expend serious energy.
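A rough way to put numbers on that trade-off (not from the answer; the Arrhenius-style estimate and the figures below are only illustrative) is to relate retention time to barrier height:

```latex
% Retention time of a bit sitting behind an energy barrier E_b:
\[
  t_{\text{retention}} \;\approx\; \tau_0 \, e^{E_b / k_B T}
\]
% With an attempt time tau_0 on the order of nanoseconds, a barrier of
% roughly 0.6 eV at room temperature gives retention on the order of
% seconds (DRAM-refresh territory), while 1.5 eV or more pushes it out
% to many years (flash territory) -- but also makes the bit costly to flip.
```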

DRAM uses small capacitors which leak. Bigger capacitors would leak less, be less volatile, but take longer to charge.
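A toy model of that leak (my own illustration with assumed component values, not figures from the answer) treats the cell as a capacitor discharging through an effective leakage resistance:

```python
# DRAM cell as a leaky capacitor: the stored "1" must be refreshed before
# its voltage sags below the sense amplifier's threshold.
import math

C = 30e-15              # ~30 fF cell capacitance (typical order of magnitude)
R_LEAK = 1e12           # assumed effective leakage resistance, ohms
V0, V_TH = 1.2, 0.6     # initial voltage and minimum detectable voltage

# V(t) = V0 * exp(-t / (R*C))  ->  solve for the time where V(t) = V_TH
t_retention = R_LEAK * C * math.log(V0 / V_TH)
print(f"retention ≈ {t_retention * 1000:.0f} ms")   # ~21 ms with these numbers

# A bigger capacitor (or lower leakage) raises R*C and hence retention,
# but a bigger capacitor also takes longer to charge: slower writes.
```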

Flash uses electrons which are shot at high voltage into an insulator. The energy barrier is so high that you can't get them out in a controlled way; the only way is to clean out an entire block of bits.

MSalters

Posted 2013-08-30T10:03:53.463

Reputation: 7 587

12Great answer! You actually answered the why of it and in an easy to understand way no less. – Synetech – 2013-08-30T16:12:43.750

10The accepted answer doesn't actually answer the question, whereas this one does. – Mark Adler – 2013-08-31T17:01:28.003

1You probably avoid mentioning this because it's too "deep down in physics", but I'd like to say that the barrier is less about energy than entropy. SRAM has even smaller capacitors than DRAM and yet doesn't leak, because it uses field-effect transistors instead of resistors – which, vaguely speaking, bypass interference from thermal noise via an externally supplied voltage threshold. Only a few die shrinks into the future will we reach another type of interference – quantum tunnelling – where an actual energy barrier will be the only way to preserve classical information. – leftaroundabout – 2013-09-02T21:44:46.210

@leftaroundabout: SRAM doesn't have capacitors at all, except parasitic and perhaps some research designs. – MSalters – 2013-09-03T06:50:26.967

@MSalters parasitic or not, any FET has a capacitance. My point is, it's not actually necessary to store much energy in a capacitor, or an inductor or anything, to preserve information over a long time. – leftaroundabout – 2013-09-03T07:03:18.027

1@leftaroundabout: Neither SRAM nor DRAM can store a bit for a longer period of time without some form of refreshing that bit (turning a 0.2 back into a crisp 0 bit). SRAM just does that continuously whereas DRAM does it in a rewrite cycle. – MSalters – 2013-09-03T07:12:44.387

Perhaps adding some relevant info regarding NVRAM may help provide an even better answer. Is CMOS a type of non-volatile memory, albeit ROM? – geezanansa – 2013-09-04T09:24:25.220

@geezanansa: CMOS is a very common IC technology (Complementary MOS, a mix of PMOS and NMOS, which are p-type and n-type Metal-Oxide-Semiconductor). – MSalters – 2013-09-04T09:55:20.173

@MSalters: Some people compare memory technologies (both volatile and non-volatile) using only their bandwidths, rarely their latencies. Your answer at least suggests the latency properties via the physics. Maybe you could improve it by detailing the issue of latencies. – Luciano – 2013-09-05T20:11:07.780

23

It should be noted that the first commonly-used "main store" in computers was "core" -- tiny toroids of ferrite material arranged in an array, with wires running through them in 3 directions.

To write a 1 you'd send equal strength pulses through the corresponding X and Y wires, to "flip" the core. (To write a zero you wouldn't.) You'd have to erase the location before writing.

To read you'd try to write a 1 and see if a corresponding pulse was generated on the "sense" wire -- if so the location used to be a zero. Then you'd of course have to write the data back, since you'd just erased it.

(This is a slightly simplified description, of course.)
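As a toy simulation of that read-restore cycle, purely to illustrate the description above (the class and sense-pulse convention here are my own simplification, not hardware detail):

```python
# Destructive read of a core plane, as described above: to read a bit you
# attempt to write a 1; a pulse on the sense wire means the core flipped,
# i.e. it used to hold a 0, and the old value must then be written back.
class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x, y, value):
        self.bits[x][y] = value          # coincident X/Y current sets the core

    def read(self, x, y):
        old = self.bits[x][y]
        self.bits[x][y] = 1              # the read attempt forces the core to 1
        sense_pulse = (old == 0)         # a flip induces a pulse on the sense wire
        value = 0 if sense_pulse else 1
        self.write(x, y, value)          # rewrite the data that was just erased
        return value

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3), plane.read(0, 0))   # -> 1 0
```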

But the stuff was non-volatile. You could shut down the computer, start it up a week later, and the data would still be there. And it was most definitely "RAM".

(Before "core" most computers operated directly off a magnetic "drum", with only a few registers of CPU memory, and a few used stuff like storage CRTs.)

So, the answer as to why RAM (in its current, most common form) is volatile is simply that that form is cheap and fast. (Intel, interestingly enough, was the early leader in developing semiconductor RAM, and only got into the CPU business to generate a market for its RAM.)

Daniel R Hicks

Posted 2013-08-30T10:03:53.463

Reputation: 5 783

Were core-based computers typically designed so that after an unexpected power failure they could (when power was re-applied) resume operation where they left off? My conjecture would be that if one performed a "shutdown" procedure one could have a system save everything of interest into core and then start executing NOPs until power was removed; if one used the proper procedure when restarting, one could then restore the system state. Do you know if systems typically had a means of autonomously triggering a shutdown procedure if external power was lost? If a core-based system were... – supercat – 2014-11-02T16:08:40.917

...to cease functioning due to power failure and didn't get a chance to finish up any operations that were in progress before power was lost completely, I would expect that whatever unit of memory was being acted upon would be lost; further, since I would expect that program counters, sequencers, etc. would not be kept in core memory, the contents of those would be lost as well. – supercat – 2014-11-02T16:12:21.443

@supercat - There were a wide variety of designs. Mainly the effort centered around maintaining the integrity of the file system, so crash recovery was most likely to try to find file operations that were in progress and complete those. But I'm remembering that it was fairly common to detect a power failure and stash the CPU registers. – Daniel R Hicks – 2014-11-02T19:33:28.297

If the memory is being used as a file system, I would expect that code could ensure that it would always be a in a valid state, such that any interrupted operation could be either rolled back or run to completion. On the other hand, by my understanding core memory was often used not because it was non-volatile, but rather because it was cheaper than any alternatives, so I'm curious to what extent designers took advantage of non-volatility or just ignored it. – supercat – 2014-11-02T19:37:57.420

@supercat - They took advantage of it quite often (and hence, eg, file systems were less robust than one would have liked for volatile RAM). Not that it was a big "selling point", but it was there, so why not? – Daniel R Hicks – 2014-11-02T19:43:14.493

18

DRAM is fast, can be built cheaply to extremely high densities (low $/MB and cm2/MB), but loses its state unless refreshed very frequently. Its very small size is part of the problem; electrons leak out through thin walls.

SRAM is very fast, less cheap (high $/MB) and less dense, and does not require refreshing, but loses its state once the power is cut. The SRAM construction is used for "NVRAM", which is RAM attached to a small battery. I have some Sega and Nintendo cartridges which have decades-old save states stored in NVRAM.

EEPROM (usually in the form of "Flash") is non-volatile, slow to write, but cheap and dense.

FRAM (ferroelectric RAM) is one of the new generation of storage technologies becoming available that does what you want: fast, cheap, non-volatile... but not yet dense. You can get a TI microcontroller that uses it and delivers the behaviour you want. Cutting power and restoring it allows you to resume where you left off. But it only has 64 kbytes of the stuff. Or you could get 2 Mbit serial FRAM.

"Memristor" technology is being researched to deliver similar properties to FRAM, but is not yet really a commercial product.


Edit: note that if you have a RAM-persistent system, you either need to work out how to apply updates to it while it's running or accept the need for the occasional restart without losing all your work. There were a number of pre-smartphone PDAs which stored all their data in NVRAM, giving you both instant-on and the potential instant loss of all your data if the battery went flat.

pjc50

Posted 2013-08-30T10:03:53.463

Reputation: 5 786

@user539484 Nice catch! But I'm not quite sure which memory type you mentioned. I think you were referring to what RBerteig mentioned - battery-backed (BBSRAM)? Correct me if I mixed it up with something else... – None – 2017-01-25T00:19:14.403

Yay memristor technology, it will be at least 10 yrs or more before we see cool products based on these "new" devices. But they should hold a ton of promise for memory implementations. – Chris O – 2013-08-30T16:14:48.547

DRUM is fast, but not very dense, and the cost per character is high. (What?? DRAM??? Never mind.) – Daniel R Hicks – 2013-08-30T17:22:54.097

1NVRAM is not the same as battery backed SRAM. NVRAM has a capacitor per bit that can be sufficiently insulated that any charge does not leak away, but can also be sensed, and programmed. The bit cell structure is fairly large, and in some technologies involved more exotic fab steps, so NVRAM is a low density high cost technology. But it also has very long storage lifetime. CMOS SRAM draws very little power when idle, and so backing it up with a battery is cost effective. The once common PC "CMOS" device is one example. – RBerteig – 2013-08-30T19:24:10.860

1SRAM+battery assembly is not a true NVRAM. True NVRAM is built on EEPROM. – user539484 – 2013-08-30T23:13:30.020

@RBerteig: My understanding is that an NVRAM is a marriage of an SRAM with a non-volatile store and a large enough energy storage medium to allow the SRAM to be copied to the non-volatile store without external power. If the SRAM and non-volatile store were in separate chips, transferring one to the other would take a while (and consume a lot of energy). Marrying them together allows the transfer to occur much faster. – supercat – 2013-09-01T20:27:46.843

According to wiki what you describe was called NOVRAM. I've never seen one in the wild. Popular devices in the 80s were serial EEPROMs with a few 100s of total bits based on a floating gate technology, using large geometry to get good lifecycle times. EEPROM evolved into FLASH devices, which bifurcate to NAND for capacity and NOR for speed and reliability.

– RBerteig – 2013-09-07T01:00:20.190

6

IMO the main problem here is indeed volatility. To write fast, writing has to be easy (i.e. not require extended periods of time), but data that is easy to change is also easy to lose. Making it persistent instead contradicts what you'd like to see when selecting RAM: it has to be fast.

Everyday analogy:

- Writing something on a whiteboard is very easy and takes little to no effort. Therefore it's fast and you can sketch all over the board within seconds.
- However, your sketches on the whiteboard are very volatile. One wrong movement and everything is gone.
- Take some stone plate and engrave your sketch there - Flintstones style - and your sketch will stay there for years, decades or possibly centuries to come. Writing it takes a lot longer though.

Back to computers: the technology to use fast chips to store persistent data is already there (flash drives, for example), but speeds are still a lot lower than volatile RAM. Have a look at some flash drive and compare the numbers. You'll find something like "reading at 200 MB/s" and "writing at 50 MB/s". That's quite a difference. Of course, product price plays a role here - general access times might improve if you spend more money - but reading will still be faster than writing.
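To make the asymmetry concrete, here is a tiny worked example using the figures quoted above (the 4 GB payload is just an illustration):

```python
# Time to move 4 GB through a drive with asymmetric read/write speeds.
SIZE_MB = 4 * 1024                  # 4 GB payload, expressed in MB
READ_MBPS, WRITE_MBPS = 200, 50     # the example figures quoted above

print(f"read:  {SIZE_MB / READ_MBPS:.0f} s")    # ~20 s
print(f"write: {SIZE_MB / WRITE_MBPS:.0f} s")   # ~82 s, four times slower
```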

"But how about flashing BIOS? That's built in and fast!" you might ask. You're right, but have you ever flashed a BIOS image? Booting through BIOS takes just moments - most time is wasted waiting for external hardware - but the actual flashing might take minutes, even if it's just a few KByte to burn/write.

However, there are workarounds for this issue, e.g. Windows' Hibernate feature. RAM contents are written to non-volatile storage (like the HDD) and later read back. Some netbook BIOSes provide similar features for general BIOS configuration and settings using a hidden HDD partition (so you essentially skip the BIOS stuff even on cold boots).

Mario

Posted 2013-08-30T10:03:53.463

Reputation: 3 685

5

Mainly because of a catch-22. If your DRAM (as said already, RAM is a very broad term; what you are talking about is called DRAM, with D for Dynamic) suddenly became non-volatile, people would call it NVRAM, which is a very different type of storage.

There is also a practical reason: currently no NVRAM type (I mean true EEPROM-based NVRAM, with no power source required) exists which allows an unlimited number of writes without hardware degradation.


Regarding DRAM-based mass storage devices: take a look at the Gigabyte i-RAM (note the rechargeable Li-Ion battery, which makes it non-volatile for a while).


user539484

Posted 2013-08-30T10:03:53.463

Reputation: 450

3

Actually, RAM doesn't, strictly speaking, NEED to be volatile, but for the sake of convenience we generally make it that way. See Magnetic RAM on Wikipedia (http://en.wikipedia.org/wiki/Magnetoresistive_random-access_memory) for one potential non-volatile RAM technology, though one still in need of further development for practical use.

Basically, DRAM's advantage is size. It's a tremendously simple technology which has very fast read-write characteristics, but as a consequence, is volatile. Flash Memory has OK read characteristics, but is TREMENDOUSLY SLOW compared to what's needed for RAM.

Static RAM has extremely favourable read-write characteristics and is quite low power, but has a large component count compared with DRAM, and is hence much more expensive. (A bigger footprint on silicon = more failures + lower chip counts per die = more cost.) It's also volatile, but even a small battery could power it for some time, making it a kind of pseudo-NVRAM if it weren't for the cost issue.

Whether it's MRAM or some other technology, it's likely that at some point in the future we will find a way around the current need for tiered memory structures which slow down computers; we're just not there yet. Even once that era arrives, however, it's likely we'll still need some variety of long-term, reliable (read: SLOW) storage medium to archive data.

SplinterReality

Posted 2013-08-30T10:03:53.463

Reputation: 370

2

As many others have mentioned, modern RAM is only volatile by design - not by requirement. SDRAM and DDR-SDRAM have the added trouble of also requiring a refresh to remain reliable in operation. That's just the nature of dynamic RAM modules. But I couldn't help but wonder if there is another option available. What types of memory exist that can fit the criteria? In this walk-through, I will only cover memory that can be read/written at runtime. This kicks out ROM, PROM, and other one-time-use chips - they're meant to be unchanging once programmed.

If we inch a bit closer to the non-volatile side of the spectrum, we do encounter SRAM along the way - but its non-volatility is quite limited. Actually, it's just data remanence. It doesn't require a refresh, but it sure will drop its data when the power is off for too long. In addition to this, it's also a bit faster than DRAM - until you reach GB size. Due to the increased size of memory cells (6 transistors per cell), when compared to DRAM, the viability of SRAM's speed advantage begins to fade as the size of the memory in use goes up.

Next up is BBSRAM - Battery-Backed SRAM. This type of memory is a modified version of SRAM that uses a battery to become non-volatile in case of a power failure. However, this introduces some issues. How do you dispose of a battery once it's done for? And isn't SRAM by itself already big enough as it is? Adding a power-management circuit and battery to the mix only reduces the amount of space that can be used for actual memory cells. I also don't remember batteries playing nice with prolonged heat exposure...

Further to the non-volatile side of the spectrum, we now lay eyes on EPROM. 'But wait', you ask - 'isn't EPROM one-time use also?' Not if you have a UV light and the will to take high risks. EPROMs can be rewritten if exposed to UV light. However, they are usually packed in an opaque enclosure once programmed - that would have to come off first. Highly impractical, seeing that it can't be rewritten at runtime, in-circuit. And you wouldn't be able to target individual memory addresses/cells - only wipe. But, EEPROM might help...

The EE stands for Electrically-Erasable. That opens the door for write operations occurring in circuit for once (in comparison to ROM, PROM, and EPROM). However, EEPROMs use floating-gate transistors. This leads to a gradual accumulation of trapped electrons, which will eventually render the memory cells inoperable. Or, the memory cells could encounter charge loss. That leads to the cell being left in an erased state. It's a planned death sentence - not what you were looking for.

MRAM is next on the list. It uses a Magnetic Tunnel Junction, consisting of a permanent magnet paired with a changeable magnet (separated by a thin insulation layer), as a bit. According to Wikipedia,

" The simplest method of reading is accomplished by measuring the electrical resistance of the cell. A particular cell is (typically) selected by powering an associated transistor that switches current from a supply line through the cell to ground. Due to the Tunnel magnetoresistance, the electrical resistance of the cell changes due to the relative orientation of the magnetization in the two plates. By measuring the resulting current, the resistance inside any particular cell can be determined, and from this the magnetization polarity of the writable plate. "

This form of memory is based upon differences in resistance and measured voltage, rather than charges and currents. It doesn't need a charge pump, which helps make its operation less power-consuming than DRAM - especially for STT-based variants. MRAM has multiple advantages to its design, including memory density comparable to that of DRAM; performance and speed comparable to that of SRAM in limited test cases; power consumption much lower than DRAM; and a lack of degradation due to repeated read/write operations. This has put MRAM in the spotlight for researchers and scientists alike, furthering its development. In fact, it's also being looked at as a possible candidate for "universal memory". However, fab costs for this type of memory are still very high, and popular manufacturers are more interested in other options - ones that look a bit unwieldy at this point.
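As a minimal sketch of the read scheme in the quote above (the voltage and resistance values here are illustrative assumptions, not device specs):

```python
# Read an MTJ cell: drive a known voltage through the selected cell, measure
# the current, infer the resistance via Ohm's law, and compare it against a
# reference midway between the parallel (low-R) and antiparallel (high-R)
# states of the magnetic tunnel junction.
V_READ = 0.1               # volts applied across the selected cell
R_PARALLEL = 5_000         # ohms, magnetizations aligned  -> logical 0 (by convention here)
R_ANTIPARALLEL = 10_000    # ohms, magnetizations opposed  -> logical 1
R_REF = (R_PARALLEL + R_ANTIPARALLEL) / 2

def read_bit(measured_current):
    r_cell = V_READ / measured_current   # Ohm's law
    return 1 if r_cell > R_REF else 0

# e.g. a cell in the antiparallel state draws 0.1 V / 10 kohm = 10 uA
print(read_bit(10e-6))   # -> 1
print(read_bit(20e-6))   # -> 0 (parallel state: 0.1 V / 5 kohm = 20 uA)
```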

I could go over Ferroelectric RAM, but it's a rather sad option. F-RAM is similar to DRAM in construction - simply replace the dielectric layer with a ferroelectric material instead. It has lower power consumption and decent read/write endurance - but the advantages wane after this. It has much lower storage densities, an outright storage cap, a destructive read process (requiring changes to any IC to accommodate it with a write-after-read architecture), and higher overall cost. Not a pretty sight.

The last options on the spectrum are the SONOS, CBRAM, and Flash-RAM (NAND Flash, NOR-based, etc.). Common SSD-like storage won't cut it though, so we can't quite find any viable options at the end of this spectrum. SONOS and Flash-RAM both suffer the issues of limited read/write speeds (used primarily for permanent storage - not optimized for RAM-like operation speeds), the need to write in blocks, and limited numbers of read/write cycles before saying 'good night'. They may be good for paging, but they sure won't work for high-speed access. CBRAM is also a bit too slow for your purposes.

The future for this hunt looks bleak currently. But fear not - I left a few honorable mentions out for your personal reading. T-RAM (Thyristor RAM), Z-RAM, and nvSRAM are possible candidates as well. While both T-RAM and Z-RAM need a refresh occasionally (like DRAM, SDRAM, and DDR-SDRAM), nvSRAM is free of such requirements. All three of these options have either better memory density, better read/write speeds, and/or better power consumption rates. They also don't need batteries - which is a big plus (BBSRAM is crying in a corner). With a closer look at nvSRAM, it appears as though we have found a viable candidate for the dreaded DDR-SDRAM replacement.

But soon (at least for those who chose to read this far), we will all be crying in our own separate corners - in addition to having the same size issues as SRAM, nvSRAM is also not available in large enough modules for use as a suitable DDR-SDRAM replacement. The option(s) are there - but either aren't yet ready for production (like MRAM), or simply never will be (nvSRAM). And before you ask, the Gigabyte i-RAM is out too - it only works via SATA interface, producing a performance bottleneck. It also has a battery. I guess we should all be looking at where memory may be going next? A bitter-sweet end, I suppose.

user446730

Posted 2013-08-30T10:03:53.463

Reputation:

1Why didn't you mention magnetic core memory? :D – Jamie Hanrahan – 2018-10-03T18:12:41.437

@JamieHanrahan Maybe I will :P ... – None – 2018-10-04T14:51:41.980

1When you were talking about Ferroelectric RAM I thought "next is about core"... they even share the destructive read feature! – Jamie Hanrahan – 2018-10-04T15:36:43.447

1

Strictly speaking, RAM does not need to be volatile. Multiple forms of non-volatile RAM were used in computers. Ferrite core memory, for one, was the dominant form of RAM (acting as main storage, from which the processor took information directly) from the '50s up until the '70s, when transistorized, monolithic memory became prevalent.

I believe IBM also referred to HDDs as random-access storage, as they differed from sequential-access storage such as magnetic tape. The difference is comparable to a cassette tape and a vinyl record -- you have to wind through the entire tape before you can get to the last song, whereas you can simply reposition the needle anywhere on the record to start listening from there.

Alex

Posted 2013-08-30T10:03:53.463

Reputation: 11

1

  • Large-capacity memories need small individual memory cells. A simple capacitor, which holds a charge for a 1 or no charge for a 0, can be much smaller than the complex logic in non-volatile RAM, and faster.

  • Refilling the charge that leaks away is an independent hardware cycle (the refresh). This logic is designed so that the processor is normally unhindered.

  • Powering down, on the other hand, stops the refreshing. So yes, a total reload is needed on boot or when resuming from hibernation.

  • Larger capacity for the same size wins the vote.

8 GB of RAM = 8,589,934,592 bytes × 8 bits = 68,719,476,736 bits (cells - no parity)
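As an illustrative follow-up (the refresh parameters below are typical DDR3/DDR4-style values I am assuming, not something stated in the answer), the refresh cycle mentioned above has to keep all of those cells alive:

```python
# How often the memory controller has to issue refreshes to keep
# ~68.7 billion leaky cells readable.
BITS = 8 * 2**30 * 8            # 68,719,476,736 cells, as computed above
T_REFW = 64e-3                  # every row must be refreshed within a 64 ms window
REF_COMMANDS_PER_WINDOW = 8192  # refresh commands spread evenly across that window

t_refi = T_REFW / REF_COMMANDS_PER_WINDOW
print(f"{BITS:,} cells kept alive")
print(f"one refresh command every {t_refi * 1e6:.1f} microseconds")  # ~7.8 µs
```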

Chris

Posted 2013-08-30T10:03:53.463

Reputation: 31

0

To answer the question: it does not!

Non-volatile random-access memory (NVRAM) is random-access memory that retains its information when power is turned off (non-volatile). This is in contrast to dynamic random-access memory (DRAM) and static random-access memory (SRAM), which both maintain data only for as long as power is applied.

The best-known form of NVRAM memory today is flash memory. Some drawbacks to flash memory include the requirement to write it in larger blocks than many computers can automatically address, and the relatively limited longevity of flash memory due to its finite number of write-erase cycles (most consumer flash products at the time of writing can withstand only around 100,000 rewrites before memory begins to deteriorate). Another drawback is the performance limitations preventing flash from matching the response times and, in some cases, the random addressability offered by traditional forms of RAM.

Several newer technologies are attempting to replace flash in certain roles, and some even claim to be a truly universal memory, offering the performance of the best SRAM devices with the non-volatility of flash. To date these alternatives have not yet become mainstream.

Source: NVRAM wiki page

geezanansa

Posted 2013-08-30T10:03:53.463

Reputation: 101