What is the "low memory warning" threshold with 16GB x64 windows?


EDIT4: We now have an answer. Thanks to everyone participating in the test, especially Jamie. Since the answer was deleted, here's the short summary: Win10 introduces memory compression, which makes this kind of testing difficult and partially pointless. If (on Win8 x64) you disable the pagefile and write a test app to allocate memory, you'll likely run into allocation failures long before the core is exhausted (out of commit charge). What Jamie did was write an app that performs millions of small allocations, and it did in fact succeed in using every last scrap of RAM with no low memory warning. So the mechanism simply does not exist on Win8 anymore: if you disable the pagefile, the first warning you will get is a crash.

As for failing "normal size" memory allocations while plenty of commit charge remains: that is probably due to address space fragmentation.


With an 8GB or 6GB Windows 8.1 x64 machine you get a low memory warning if your free RAM drops below about 20% of the total system RAM (1.6GB and 1.2GB, respectively) AND there is no more space in the pagefile. If pagefile space is available, memory will be paged out to keep that 20% of RAM in reserve. So if you're playing Skyrim with a lot of mods and get a low memory warning, you'll probably see the pagefile completely full and a bit under 20% of RAM still available.

Has anyone tested what the limit is with a 16GB Windows machine? Does it scale without limit, i.e. would you receive a low memory warning at 3.2GB?

The easiest way to test this is to disable the pagefile altogether or set it to a low value (like 1GB), then start several apps with high memory use and/or just use this little utility: http://www.soft.tahionic.com/download-memalloc/
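If you'd rather not trust a third-party tool, a small test program does the same job. Here's a minimal sketch in C (my own illustration, not the linked utility): it allocates 64MB chunks in a loop and touches every page so the OS has to actually back them, stopping when allocation fails. Watch the commit charge climb in Process Explorer while it runs.

    /* Minimal memory-hog sketch: allocate until malloc fails.
       Touching every page forces real backing (RAM or pagefile). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (64u * 1024u * 1024u)   /* 64 MB per allocation */

    int main(void)
    {
        size_t total = 0;
        for (;;) {
            unsigned char *p = malloc(CHUNK);
            if (p == NULL) {
                printf("\nmalloc failed after %zu MB\n", total >> 20);
                break;
            }
            memset(p, 0xA5, CHUNK);   /* touch every page; leak on purpose */
            total += CHUNK;
            printf("allocated %zu MB\r", total >> 20);
        }
        getchar();   /* hold the allocations while you read the counters */
        return 0;
    }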

I'd test this myself, but I have no access to a PC with 16GB (or more!) of RAM.

In Win8.1 the actual memory use figure is a bit harder to see, as Performance Monitor does not show you pagefile usage. But Task Manager gives you the "committed" value, which shows total memory in use (including the pagefile).

Edit: Process Explorer's system information is probably the best way to monitor how memory is being used. Commit charge and commit limit are the relevant bits here; if you have no pagefile, commit limit = RAM and you should get a low memory warning at roughly 81% commit charge.

Edit2: To make it even more unambiguous, here's pseudocode for the two cases I'm asking about.

Case A: there is no limit on how large the minimum free memory (available commit charge) can grow before the warning is issued:

if (CC/CL) > 0.8 then print "low memory warning"

Case B: the minimum free memory (available commit charge) is capped at some absolute value, and no warning is issued before that cap is crossed:

if (CC/CL) > 0.8 and (CL-CC) < 2048MB then print "low memory warning"
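For reference, both checks are easy to run against the live numbers. Here's a rough C sketch of mine (the 2048MB cap is just the assumed value from Case B) that reads commit charge and commit limit via GetPerformanceInfo and applies both rules; run it while loading the machine to see which rule matches the moment the real warning pops up. Link with psapi.lib.

    /* Read CC and CL and test the two candidate warning rules. */
    #include <windows.h>
    #include <psapi.h>    /* GetPerformanceInfo; link with psapi.lib */
    #include <stdio.h>

    int main(void)
    {
        PERFORMANCE_INFORMATION pi;
        pi.cb = sizeof(pi);
        if (!GetPerformanceInfo(&pi, sizeof(pi))) {
            fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
            return 1;
        }

        /* CommitTotal/CommitLimit are in pages; convert to bytes. */
        unsigned long long cc = (unsigned long long)pi.CommitTotal * pi.PageSize;
        unsigned long long cl = (unsigned long long)pi.CommitLimit * pi.PageSize;
        double ratio = (double)cc / (double)cl;

        printf("CC = %llu MB, CL = %llu MB, CC/CL = %.1f%%\n",
               cc >> 20, cl >> 20, ratio * 100.0);

        if (ratio > 0.8)                                   /* Case A */
            puts("Case A would warn now");
        if (ratio > 0.8 && (cl - cc) < (2048ULL << 20))    /* Case B */
            puts("Case B would warn now");
        return 0;
    }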

Edit3: It turns out Windows 10 compresses memory when it runs low enough on actual RAM, which naturally makes this test harder to perform. You can still exhaust the available RAM to be sure, but Windows will compress zero-filled allocations quite efficiently. On Win8.1 x64 and earlier it's a simple task.
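If you do test on Windows 10, fill each chunk with incompressible data instead of zeros or a constant pattern, so the compressed store can't shrink it away. A tiny xorshift fill (a sketch of mine; any fast PRNG works) can replace the memset in the hog above:

    /* Fill a buffer with pseudo-random bytes so memory compression
       gains almost nothing. Drop in instead of memset in the hog. */
    static void fill_random(unsigned char *p, size_t n)
    {
        unsigned int x = 2463534242u;   /* xorshift32 seed */
        for (size_t i = 0; i < n; i++) {
            x ^= x << 13; x ^= x >> 17; x ^= x << 5;
            p[i] = (unsigned char)x;
        }
    }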

Update

I'm currently having the misfortune of having to use a 4GB Windows 7 x64 box. On this system Windows tries to keep ~800MB of physical memory available. This is of course the familiar 20% slice, and it hurts a lot worse than the 1.6GB "reserve" on an 8GB box.

I see a moderator deleted my answer, where I summarized Jamie's findings using a bespoke program written to exhaust the core. Thanks for that.

Barleyman


Downvoted because you know the answer, you just don't like it. The correct answer is not going to change, and you've already been given links to tons of evidence that you can get a low memory warning no matter how much RAM is free. – David Schwartz – 2015-12-03T20:56:23.247

Presume I know about "commit charge". – Barleyman – 2015-12-04T13:10:17.360

And actually I have no idea. Hence the question whether the commit charge threshold is capped or not. To make it unambiguous, case A: if (CC / CL) > 0.8: print "low memory warning"; case B: if (CC / CL) > 0.8 and (CL - CC) < 2048MB: print "low memory warning" – Barleyman – 2015-12-04T16:54:26.740

@DavidSchwartz We have an answer now. Which, incidentally, was not what I expected. It's also the first piece of "evidence" I have seen, as in someone actually went and tried it in a repeatable manner. – Barleyman – 2016-02-05T14:13:58.287

Alas Barleyman has come to the wrong conclusion. Barleyman concluded that the warning doesn't exist as of Windows 8. But it didn't exist before, either. The "low memory" warning has always pertained to commit charge, not RAM. Because of the way the commit limit works, Windows will simply not let programs allocate virtual memory for which there is no physical storage. If too much of the "physical storage" is on disk rather than RAM, the system will slow down, perhaps catastrophically, but there's no warning. The commit limit has always worked this way. – Jamie Hanrahan – 2019-03-06T12:00:13.480

@JamieHanrahan Back after a four-year break? I had no access to a Win7 machine at the time, hence no mention of them. The warning triggers remarkably consistently unless someone writes a synthetic test application like the one you did. There's also the remarkably stable percentage of core that the system wants to keep unallocated when a pagefile is available, whatever the exact mechanism is. Anyway, thanks for helping to figure it out back in the day, instead of dithering about terminology. – Barleyman – 2019-03-07T12:45:21.567

I think I have finally "grokked" what you're getting at. The question title asked about the "low memory warning" threshold. Well, that is, and remains, about attempts to commit virtual memory beyond the commit limit. But what triggers aggressive shrinkage of process working sets is a different threshold of a different metric entirely (that of available RAM). They just appear related because it's tough to reach the commit limit without also putting a lot of RAM in use. However, I think I can tweak my little random memory-user to do it, just by switching to file mapping. – Jamie Hanrahan – 2019-03-11T21:43:44.970

@JamieHanrahan I believe you already managed to get pretty close to 100% of memory used, so it's indeed possible to do, but unlikely to happen in a real-life use scenario. In any case, Windows 10 memory compression changes the play. – Barleyman – 2019-03-18T14:02:24.500

My thought was not about getting closer to 100% RAM in use, but to do so without using much commit charge. As for memory compression, it only applies to stuff on the modified page list, and even then, only to stuff that would be backed by the page file (i.e. private commit). It doesn't apply to mapped files, so changing the memory-hog test app to use mapped files instead of pagefile-backed private commit means that memory compression won't affect the results. – Jamie Hanrahan – 2019-03-19T05:48:27.760

Answers


There's some misunderstanding here which I'd like to clear up, for the OP's sake.

@David Schwartz's answer, while not complete, is certainly accurate; however, I'd like to add to what he said.

@OP: in 2011 my employer tasked me with finding an answer to this question. After some 3 months of hardware testing and extensive research, I did find it.

It's nothing to do with the pagefile or application malloc/vmalloc allocation. Mostly the issue is an outdated API and a broken D3D implementation.

The really short answer:

WDDM 2.0 + D3D 11.2 + 4GB GPUs

The missing 2/4GB of RAM has been reserved for the GPU. The CPU cannot touch it, therefore it doesn't exist. Regardless of whether the VRAM is used or not, it is reserved and mapped into the GPU address space.

RAM which is GPU-reserved doesn't show up against the system commit limit, because it's not available to the CPU. Nor does it appear against the commit charge, because it's not allocated, only reserved.

^^ Russinovich actually talks about this anomaly in Windows Internals, 7th Edition. It's simply an issue with the resource usage API, nothing more.

I read the book back to front trying to figure out why my missing memory was always equal to the amount of VRAM on the GPU.

Beginning around DX11.2, WDDM 2.0 supports unified addressing between CPU RAM and GPU VRAM, meaning the GPU can map RAM into its own address space for zero-copy paging, tiled resources, or buffering.

This is where it all goes south. Dynamic resource allocation was meant to be supported in 8.1, but it didn't get implemented until W10. Dynamic resource allocation is a DX11.x feature which allows the GPU-reserved system memory to be dynamically resized and given back to the CPU during gaming. The "dynamic" part never made it, but reserving system memory did.

So, if you have a GPU with 4GB VRAM on 8.1, 4GB of system RAM is sliced off and reserved for the GPU, leaving only 4GB for the entire rest of the system.
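If you want to check whether RAM is being hardware-reserved on your own box, compare what the firmware reports as installed with what the OS can actually use. A quick sketch of mine (not from the book) using GetPhysicallyInstalledSystemMemory and GlobalMemoryStatusEx:

    /* Installed RAM (from SMBIOS) minus OS-usable RAM = hardware reserved. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        ULONGLONG installedKB = 0;
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);

        if (!GetPhysicallyInstalledSystemMemory(&installedKB) ||
            !GlobalMemoryStatusEx(&ms)) {
            fprintf(stderr, "query failed: %lu\n", GetLastError());
            return 1;
        }

        ULONGLONG installed = installedKB * 1024ULL;   /* bytes */
        printf("installed %llu MB, usable %llu MB, reserved %llu MB\n",
               installed >> 20, ms.ullTotalPhys >> 20,
               (installed - ms.ullTotalPhys) >> 20);
        return 0;
    }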

It's fine to run with the pagefile disabled on 8.1/DX11; just remember to add some extra RAM depending on how much VRAM you have.

The other irony here is that because DX9 is 32-bit, those games don't support over 4GB of address space. So 4GB of RAM is reserved, but games like Fallout NV can't even make use of it anyway.

We do quite a bit of platform testing where I am; the rule of thumb I find works is 16GB RAM with a 4GB GPU, which leaves ~12GB free for DX12 games that eat RAM.

You could go to W10 (ugh), which doesn't suffer from these issues. :P

Btw, there's also a page in the MSDN D3D library which covers DX9 GPU memory.

"Now, true, when mm notices it's short on RAM, it will try to recover some: by paging out long-idle processes, and also not-recently-accessed pages of all processes." – Jamie Hanrahan

That's not entirely correct. An idle process doesn't get paged out; only its working set is trimmed (if possible). Any idle excess pages are then flushed to disk, but there is a minimum working set size which always resides in RAM.

Trimming the working set is a last resort though, a sign of insufficient RAM. Normally, mapped/cached file pages get cleared first, from the standby list.

Btw, on a side note, the standby list consists almost entirely of files cached from the HDD into RAM. Check the cache after a defrag or after reading your 200GB music collection; there will also be no free memory left. :)

OP, if you like I can send some screenshots/results/conclusions from testing games and other apps, notes etc. Maybe 8-9 games on half as many platforms. Let me know.

PS: All of the above I wrote from memory, because all the testing happened 4-5 years ago; it's possible (hopefully not) that a couple of minor points may not be exactly 100% word for word as written in the quoted sources.

There is something else I forgot to mention, which is your question of free memory vs available memory. There is a substantial difference between what is available and what is free; I will cover this in more depth when I have time. But rest assured, no free memory WILL result in severe performance degradation if a memory-intensive program such as Skyrim is running with ~25GB worth of mods. Processes on 64-bit are limited to 8GB for the working set, however the total address space available to that one process is 8TB. This is called a section object, and it's how AWE works.

Paging still takes place, but happens entirely within RAM (using pointers, I believe). Whenever pages in the standby list are referenced, a page fault occurs, which is why page faults happen even with no pagefile.

Page faults occur if a referenced page is in the standby list; the actual location, HDD or RAM, doesn't really come into it...

Also, when it comes to a disabled pagefile, there is no virtual address space; there is only address space. Pointers are still used but always point to real memory addresses (well, ideally, but not always), and the commit limit is the same as installed RAM. :)

Robert Fischer


Thank you. This goes to show that argumentum ad auctoritatem does not just cover all the bases. In any case I don't think this is all there is to it, as pre-Win10 you could in fact dig into that last 2GB, but it'd get paged out until 2GB of core was again free. You may also take a look at the testing Jamie did; he managed to actually exhaust the entire memory, but ONLY with a test app that does thousands of small allocations (and presumably releases). Also – Barleyman – 2017-05-26T21:37:20.683

"memory" as in "core" before a linguist lawyer strikes again here. – Barleyman – 2017-05-26T21:54:41.080

1"Also when it comes to disabled pagefile there is no virtual address space - [...] Pointers are still used but always point to real memory addresses" - I'm sorry but those are absurd claims. Even w/o a pagefile, page tables still exist and address translation is happening. So no, pointers don't contain real memory addresses. (Five minutes with windbg will prove this.) Even w/o a pagefile, plenty of paging to and from disk still occurs, due to mapped files. e.g. code files (exes, dlls, etc.) as well as any file accessed with Read/Write calls, unless opened with FILE_FLAG_NOBUFFERING. – Jamie Hanrahan – 2017-05-27T13:07:49.063

In fact, with no pagefile, there is typically much more paging since there is no ability to get dirty, unbacked pages out of memory, and thus there is much less room for clean pages. If the working set of clean pages exceeds the remaining RAM, pages of executables will get evicted and reloaded like mad. (For example, you can memory map a 4GB file even if you only have 2GB of RAM. Clearly pointers can't point to real memory addresses there, nor would we want them to since it would waste RAM like mad.) – David Schwartz – 2017-05-28T03:38:16.887

This does not answer the question. This answers a different question, that of why Windows on some configurations refuses to use (some amount) of RAM at all. That is a valid question, but the question here is rather about why and when Windows seems to restrict applications' use of RAM (within the total already seen and usable by Windows) once the "out of memory" warnings appear (which are referring to commit charge usage, not RAM usage). (Windows does not actually do that; the observation that it does shows only coincidence. Neither is triggered by the other.) – Jamie Hanrahan – 2019-03-19T21:57:40.233


Unfortunately, there are wrong answers to questions like this one all over the Internet. If you see any answer that doesn't point out the difference between memory that has been allocated and RAM that is being used, that answer will be completely wrong. This memory warning can be produced with any amount of free RAM. You can see reports all over the Internet of people getting low memory warnings even though they have lots of RAM free.

The low memory warning has nothing whatsoever to do with how much RAM is free. You can have lots of free RAM and still get the low memory warning, because that RAM is (indirectly) reserved to back allocations that have already been made but have not yet used it, and so it cannot be used to back subsequent allocations.

For example, suppose you have a Windows 8.1 x64 machine with 16GB of physical RAM and no pagefile. Then imagine you run a program that allocates 15GB but doesn't use any of it yet. If the OS allows the allocation, it will begin giving low virtual memory warnings (because it cannot permit further allocations of backed memory to succeed) even though almost all of the 16GB of RAM is still free.

You have to be very careful to separate used RAM from allocation requests for virtual memory.

Windows will give you that low memory warning when it may need to fail allocations of virtual memory that might require backing store. This can occur regardless of how much free RAM the system has because that RAM can become constrained due to prior allocations that also might require backing store.

For example, if you perform a normal memory allocation for 8GB but haven't touched it yet, essentially no RAM is used by that allocation. But if you have no pagefile, 8GB of free RAM is now constrained: it must remain discardable so that it can be used to back that allocation later, should it need it.
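To see this distinction with your own eyes, commit a large region without touching it and watch the counters. A rough sketch of mine (64-bit build assumed; shrink the size to fit your machine):

    /* Commit 8 GB without touching it: commit charge jumps at once,
       but RAM "in use" and the working set barely move until pages are written. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 8ULL << 30;   /* 8 GB, charged against the commit limit */
        unsigned char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                        PAGE_READWRITE);
        if (p == NULL) {
            printf("VirtualAlloc failed: %lu (commit limit hit?)\n",
                   GetLastError());
            return 1;
        }
        puts("committed, nothing touched - check Task Manager, then press Enter");
        getchar();

        for (SIZE_T i = 0; i < size; i += 4096)
            p[i] = 1;   /* demand-zero faults: only now does RAM get used */

        puts("pages touched - the working set now reflects the allocation");
        getchar();
        return 0;
    }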

In effect, RAM is like money in the bank and memory allocations are like checks. You can have plenty of money left in the bank, but you may be unable to write any more checks because people might cash checks you've already written. A person can be unable to buy anything no matter how much money they have left in the bank. (Page files are like a line of credit in this analogy.)

It's not possible to understand how memory works on Windows in terms as simple as those in your question. You have to understand the distinction between allocating memory and using RAM.

That said, there is some threshold, possibly a fraction of total RAM, that triggers this warning. But it has nothing to do with whether free RAM is less than that threshold.

David Schwartz



If anyone wants to see a good "in-depth" answer explaining the nitty gritty why this happens, see http://superuser.com/questions/943175/windows-says-ram-ran-out-while-there-is-still-4-gb-of-physical-memory-available/943185#943185

– Scott Chamberlain – 2015-12-03T20:47:50.903

Scott, that was a pretty decent overview. I should check the commit limit; it's been a little while since I last looked into this. That being said, that 20% unused RAM block is there when you have adequate pagefile space available. Wrt David, I see he's progressing by admitting such a threshold may in fact exist. I have no interest in arguing about it; I'd merely like to know if it's actually capped at some absolute value. – Barleyman – 2015-12-04T12:39:19.537

What "20% unused RAM" are you talking about? Are you saying you've seen low memory warnings even with adequate pagefiles just because RAM was nearly full? (That would be bizarre because most systems have their RAM nearly full most of the time.) – David Schwartz – 2015-12-04T12:40:59.967

There must be some threshold at which the low memory warning is issued. That threshold, even though it's not a measure of RAM, may numerically be set to some fraction of total RAM available. But it has nothing whatsoever to do with how much RAM is free. (Something you still don't seem to understand no matter how many times it's explained to you because you still think this has something to do with how much unused RAM there is.) – David Schwartz – 2015-12-04T12:48:32.250

Scott, I just went and looked into it. The commit limit is simply RAM + pagefile, so the relevant metric here is the commit charge.

When I have a minute I'll check on my Windows 10 box whether it still hits you with a low memory warning when you get past 80% of the commit limit. Plus I'll check whether things get pushed into the pagefile to keep that last 20% of RAM available.

Still, it'd be nice if someone with more RAM would check where that threshold is. Maybe Microsoft decided 2GB is enough buffer. Maybe they think 4GB is all anyone needs. Maybe there's no limit and it grows to over 6GB with 32GB RAM. – Barleyman – 2015-12-04T13:06:08.877

What do you mean by "enough buffer"? And what do you mean by RAM that's kept "available"? – David Schwartz – 2015-12-04T14:22:34.313

@ScottChamberlain Let's notify you properly.. – Barleyman – 2015-12-04T16:58:50.067

Yeah, I added an answer to the old question about Win8 x64 memory usage behavior. Not surprisingly, it's the same as Win7 x64, but I never got around to verifying that before.

Instead of petulantly downvoting me, perhaps you'd check what the low memory threshold is with whatever big-RAM box you have. It'd be a little rich to get a low memory warning with 6.4GB of commit charge space remaining on a 32GB machine with no pagefile, but I wouldn't be surprised. – Barleyman – 2015-12-11T16:07:12.923

@Barleyman It's like this: if (CC + a new allocation request) > CL then show the warning (and reject the request). It's that simple. There is no "2 GB buffer" or "20% margin" or any such thing. Note: When you see the warning, the request has already been rejected, so the CC is the same as it was before the request. When you look at the system after the warning appears, you have no way of knowing how much the request was for, so the reason for the warning will likely not be apparent. "But, CC is less than CL!" Right, but it wouldn't have been had the request been approved. – Jamie Hanrahan – 2016-01-31T00:40:11.357

@JamieHanrahan No. That would crash programs with no warning whatsoever. That is the behavior you get if you disable the low memory warning entirely, which is not a good idea. For what you (incorrectly) describe as the normal Windows operating condition, a warning is instead issued before bad things happen, and rightly so. Whether the warning threshold is sensible is an entirely different discussion, but that's not what the question you downvoted was about at all. – Barleyman – 2016-02-01T18:47:24.633

@Barleyman The question was about something that does not exist: a warning issued when free RAM drops below some threshold. – David Schwartz – 2016-02-01T18:53:01.903

@Barleyman No. The "normal condition" is that pagefile expansion is enabled, which makes the "commit limit" a soft limit as long as the max PF size is not exceeded. When an alloc request would exceed the current limit, which is based on the PF size at the time, you get a pop-up warning - but the request succeeds if the PF can be expanded by enough to allow it. When an alloc request would require the PF to be expanded beyond the maximum, you get another warning, and the alloc request simply fails. The program dies too unless it properly handles that case... which is not hard to do. – Jamie Hanrahan – 2016-02-02T00:06:58.563

@Barleyman you need to understand that "running out of RAM" (just RAM, not CL) is not a fatal error to anything (well, aside from some kernel allocations that need nonpageable memory) as long as there is enough backing store (disk) available. All it does is slow things down. You can VirtualAlloc 32 GB on a 4 GB RAM system as long as you have about 28 GB of pagefile. It may not even be all that slow, provided not much of that space is actually being accessed often. – Jamie Hanrahan – 2016-02-02T00:10:03.563

@Barleyman Oh, and re: "In this case more than half of the remaining commit charge resides in RAM." Not sure how you can say that. Although RAM "in use" is greater than half of the commit charge, there's a lot more in RAM than just the resident portion of committed virtual memory, and there's nothing there that shows the actual pagefile usage. – Jamie Hanrahan – 2016-02-02T01:30:36.540

With or without a pagefile, RAM reserved to cover commit charge can still be (and typically is mostly) used to store clean pages (information read from disk), since those can always be discarded if the RAM is needed for some other purpose. – David Schwartz – 2016-02-02T15:03:05.180

@DavidSchwartz For sure. This is definitely trivia for most users, but that doesn't make it an invalid question about how some inner mechanisms work. Windows 10 has definitely made some changes here. I had a friend try this just now with a 32GB box, but with the Windows 10 memory compression feature it's not straightforward to try anymore. I'm sure you'd like to engage in an endless pointless debate whether THAT exists. Memory allocation gets really sloooooow when you approach the limit. So effectively you now get an unusably laggy PC whether you overallocate pagefile or RAM. – Barleyman – 2016-02-03T11:18:27.807

We both also know enough to actually use (within the memory-leak software) the allocated memory to prevent overallocation by the OS. – Barleyman – 2016-02-03T11:19:56.697

"Unusably laggy" would probably be relative. If you're using memory-leaking browser, this would happen slowly over time, instead of sitting staring at the screen how your memory leak application is getting slower and slower with tens of gigabytes of RAM being compressed. – Barleyman – 2016-02-03T11:27:17.713

@JamieHanrahan "You need to understand that "running out of RAM" (just RAM, not CL) is not a fatal error to anything (well, aside from some kernel allocations that need nonpageable memory) as long as there is enough backing store (disk) available." How is this relevant for a test case where we disable the pagefile? Also, in the answer to the win8.1 memory usage pattern, pagefile size is restricted here, CL cannot grow. – Barleyman – 2016-02-03T16:45:16.090

@Barleyman The question isn't invalid. But your answer is. There is no "20% margin" or "2 GB margin" or anything of that nature in the allocation request path (which is where that pop-up happens). It is as I described it. // Windows does not "overallocate", the amount of an allocation request is determined by each program. // Without a PF, the CC+request vs CL test makes sure there is enough RAM for all commit, and mapped files provide the needed backing store for everything else. – Jamie Hanrahan – 2016-02-03T18:56:28.460

(contd.) Now, true, when mm notices it's short on RAM, it will try to recover some: by paging out long-idle processes, and also not-recently-accessed pages of all processes. So you can sometimes see RAM usage peak, maybe all the way to 100%, shortly followed by a reduction as the "balance set manager" thread kicks in. But that's not in the allocation request path and has nothing whatsoever to do with the "out of memory" pop-up. Nor will it prevent future such pop-ups (or alloc failures) because it doesn't affect the CL at all. And the CL is the only thing that limits VirtualAlloc requests. – Jamie Hanrahan – 2016-02-03T19:05:36.350

@JamieHanrahan It's not an "out of memory" pop-up, it's a "low on memory" pop-up. I have had many programs crash with an "out of memory" error, some I have even written myself. And I have allocated memory in a low memory condition too, when there's a memory leak. Most modern OSes do not actually allocate available memory until you do something with it, like write to it, or it's zeroed by the compiler. If you can demonstrate there is no warning threshold at all and your test program immediately crashes when you see the warning, fine. Post the data and I'll accept it as a result, provided it's repeatable. – Barleyman – 2016-02-03T19:53:54.043

Unless it does something like allocating 8GB chunks in one go, of course. – Barleyman – 2016-02-03T19:55:55.500

@Barleyman Both popups come from the same CC vs CL check. "Low on memory" means pagefile expansion is enabled and possible (there's enough free space on the disk) and the requested allocation succeeded. "Out of memory" happens if PF expansion is disabled or can't help. Neither warning means the OS has killed a process, only that an attempted alloc failed. The program is supposed to test the return from malloc or new or VirtualAlloc or whatever and do something other than blindly reference the region it thought it allocated. If it doesn't, that is its own fault. – Jamie Hanrahan – 2016-02-03T20:16:35.360

1Regarding "allocating actual memory", you're (again) confusing virtual address space with RAM. Whether or not a VirtualAlloc succeeds, and whether or not you see the popups, has nothing to do with free RAM, only with CC (private virtual bytes) vs. CL. CL is simply PF size + total RAM, not free RAM. RAM is only allocated to the process when a page is faulted in ("demand paging") but all of committed virtual address space ("commit charge") is committed once the call succeeds. The CC vs CL check makes sure there will always be some place to keep committed VM - either in RAM or in the PF. – Jamie Hanrahan – 2016-02-03T20:18:19.537

Let us continue this discussion in chat.

– Barleyman – 2016-02-04T13:32:59.190