First, it is a giant mistake (not yours) that Windows' dialog, the one where you set the pagefile size, equates the pagefile with "virtual memory." The pagefile is merely the backing store for one category of virtual address space, that used by private committed memory. There is virtual address space that is backed by other files (mapped files), and there is v.a.s. that is always nonpageable and so stays in RAM at all times. But it's all "virtual memory" in that, at least, translation from virtual to RAM addresses is always at work.
Your observation is correct: Windows' allocation of pagefile size uses a simple calculation of default = RAM size and maximum = twice that. (It used to be 1.5x and 3x.) They have to set it to something, and these factors provide a result that's almost always enough. It also guarantees enough pagefile space to catch a memory dump if the system crashes (assuming you've enabled kernel or full dumps).
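That sizing rule is simple enough to sketch in a few lines of Python (an illustration of the rule described above, not anything Windows actually exposes):

```python
def default_pagefile_sizes(ram_bytes):
    """Sketch of the automatic sizing rule: initial size = RAM size,
    maximum = twice that. (Older Windows versions used 1.5x and 3x.)"""
    return ram_bytes, 2 * ram_bytes

GB = 1024 ** 3
initial, maximum = default_pagefile_sizes(16 * GB)
print(initial // GB, maximum // GB)  # 16 32
```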
As far as I understood the concept of the pagefile, Windows only expands it when needed and starts off with the minimum amount — though it never does; it always goes to the absolute maximum.
Ah... it starts out with the "initial size". This is NOT the "minimum allowed". So that is why you are seeing it at your RAM size, because Windows uses that for the initial size.
But... are you saying you are seeing the actual pagefile size go to the maximum setting? e.g. if it is set to 16 GB initial, 32 GB max, you're seeing the actual size ("currently allocated") at 32 GB? It should always revert to the initial size when you reboot, btw.
Do you see "system is running low on virtual memory" pop-ups? 'cause you should, when the OS expands the pagefile beyond the current size.
The OS is not going to enlarge the pagefile unless something has actually tried to allocate so much private committed memory that it needs the enlarged pagefile space to store the stuff. But, maybe something has. Take a look at Task Manager's Processes tab. The "Commit size" column shows this for each process. Click on the column heading to see who the hog is. :)
This is extremely annoying because disabling the pagefile, or not setting it to an amount equal to the installed RAM, causes issues in some programs that use lots of memory, like 7zip; they just claim that there is not enough memory to allocate even though there is plenty of free, usable memory.
This has to do not with available RAM but with something called "commit charge" and "commit limit". The "commit limit" is the sum of (RAM - nonpageable virtual memory) + current pagefile size. (Not free RAM, just RAM.) So a system with say 8 GB RAM and 16 GB current pagefile would have a commit limit of about 24 GB ("about" because the RAM that holds nonpageable contents does not count toward the commit limit).
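As a back-of-the-envelope sketch of that arithmetic (the nonpageable figure here is an assumed illustrative value, not a measured one):

```python
GB = 1024 ** 3

def commit_limit(ram_bytes, pagefile_bytes, nonpageable_bytes):
    """Commit limit = (RAM - nonpageable virtual memory) + current pagefile size."""
    return (ram_bytes - nonpageable_bytes) + pagefile_bytes

# 8 GB RAM, 16 GB current pagefile, ~0.5 GB nonpageable (illustrative guess):
limit = commit_limit(8 * GB, 16 * GB, GB // 2)
print(limit / GB)  # 23.5 -- i.e. "about 24 GB" as described above
```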
The "commit charge" is how much private address space currently exists in the system. This has to be less than the commit limit, otherwise the system cannot guarantee that the stuff has a place to be.
On task manager's Performance tab you can see these two numbers with the legend "Commit (GB)". e.g. I'm looking at a machine that says "Commit (GB) 1/15". That's 1 GB current commit charge out of 15 GB limit.
If a program like 7zip tries to do e.g. a VirtualAlloc of size > the (commitLimit - commitCharge), i.e. greater than the "remaining" commit limit, then if the OS can't expand the pagefile to make the commit limit big enough, the allocation request fails. That's what you're seeing happening. (Windows actually has no error message for "low on physical memory", not for user mode access! Only for virtual.)
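The decision the OS makes can be sketched like this (a simplified model of the commit test described above, not the actual kernel logic; `max_limit` stands for the commit limit implied by the pagefile's configured maximum size):

```python
def allocation_succeeds(request, commit_charge, commit_limit, max_limit):
    """Sketch: a VirtualAlloc-style commit succeeds only if the request
    fits under the commit limit, possibly after the OS grows the
    pagefile (which can raise the limit up to max_limit)."""
    if request <= commit_limit - commit_charge:
        return True
    # Otherwise the OS tries to expand the pagefile to raise the limit:
    return request <= max_limit - commit_charge

GB = 1024 ** 3
# 24 GB limit, 20 GB already committed, pagefile cannot grow any further:
print(allocation_succeeds(6 * GB, 20 * GB, 24 * GB, 24 * GB))  # False
```

Note that free RAM never appears in this test, which is exactly why the failure looks so mysterious when plenty of RAM is free.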
It has nothing to do with free RAM, as all RAM (minus the tiny bit that's nonpageable) counts toward the commit limit whether it is currently free or not.
It's confusing because when you look at the system after one of these allocation failures, there is nothing apparently wrong - you look at the system, your commit charge is well below the limit, you may even have a lot of free RAM. You'd have to know how much private committed memory the program had been trying to allocate in order to see what the problem was. And most programs won't tell you.
Sounds to me like 7zip is being far too aggressive at trying to allocate v.a.s., maybe it is scaling its requests based on your RAM size? Are you sure there is no smaller pagefile setting where 7zip would be happy? Are you using the 32-bit or 64-bit version of 7-zip? Using the 32-bit version would fix this, since it can't possibly use more than 2 or maybe 3 GB of virtual address space. Of course it might not be as fast on huge datasets.
This behavior potentially decreases the lifespan of my drives (SSDs) tremendously.
Well, no, not really. Simply putting a pagefile out there of whatever size does not mean the system will actually be writing that much to the pagefile. (Unless you have the option set to "clear pagefile on shutdown", and even then I don't think it writes the whole thing; the Mm knows what blocks are in use and should only write those... I've never thought of that before; I'll have to check.)
If you want to look at how much stuff is really in your pagefile, use the PerfMon utility. There is a counter group for Page file, and you of course want the "% usage" counter. Interpret this percentage per the file's actual size (as shown in Explorer).
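Converting that counter into bytes is just a percentage of the file's actual size, e.g.:

```python
def pagefile_bytes_in_use(percent_usage, pagefile_size_bytes):
    """Turn PerfMon's Paging File '% Usage' reading into bytes,
    using the pagefile's actual size as shown in Explorer."""
    return percent_usage / 100 * pagefile_size_bytes

GB = 1024 ** 3
# e.g. a reading of 25% on a 16 GB pagefile:
print(pagefile_bytes_in_use(25, 16 * GB) / GB)  # 4.0
```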
It DOES use a lot of space and since space on an SSD is pretty dear, this is a concern for most of us. One thing you might try is putting a reasonable-sized pagefile, say 4 or 8 GB, on your SSD, then attach a spinning rust drive and put a large pagefile on that. Or if you want an SSD and nothing else for your pagefile, buy a cheap small one just for the second pagefile.
32GB of RAM is a crazy amount of RAM. Disabling virtual memory with that much physical memory available is not going to cause problems for programs like 7zip (or really anything for that matter). With 32GB of RAM and even the most demanding of consumer software currently available, you do not need virtual memory enabled. Are you sure you have 32GB? – Jason C – 2014-07-06T14:15:39.190
Unless you have SSDs, the lifespan of your drives is not significantly affected. – Daniel R Hicks – 2014-07-06T14:17:25.767
And if you are seeing "weird behavior" and memory allocation errors in a 32GB system with virtual memory disabled, something else is wrong here that you need to look into. Check the memory column in the task manager and find out what is taking up all of that memory. Also check the System properties to make sure Windows is actually recognizing all 32GB, and you may also want to run memtest. – Jason C – 2014-07-06T14:18:59.827
I think you need to describe your scenario in more detail. Yes, 7zip can need large amounts of RAM (around 10 GiB for LZMA with a 1024 MB dictionary) when you push the dictionary size to the max. I could imagine that if you use the 32-bit version of 7zip, that would be an issue. That issue, however, is not related to the amount of RAM you have installed but rather the amount a 32-bit program is able to use. So I recommend you describe the precise scenario in which you are having the problem. As for your question: it has always (Win 3.11?) been that way, which might not be exactly clever for today's world. – TheUser1024 – 2014-07-06T14:31:31.793
Yes I have an SSD (edited), yes I definitely have 32GB of RAM, and yes it DOES cause problems: 7zip x64 no longer works with a 256MB dictionary and 8 threads, yet it does absolutely fine with the pagefile enabled and the same settings. The same goes for RAM disk tools; they complain about not enough RAM even when allocating tiny amounts like 512MB. My memory isn't used up at all in these scenarios (at least 24GB free or more) and the RAM isn't bad (tested with memtest86+ in several passes). – PTS – 2014-07-06T14:50:33.550
I want to dispel a misconception: "This behavior potentially decreases the lifespan of my drives (SSDs) tremendously." That was true for first-generation SSDs under XP, but it has not been an issue for quite a few years now. Modern drives have enough write cycles, and modern OSes know they are on an SSD and change their behavior, so you will likely replace the computer before it runs out of writes. – Scott Chamberlain – 2014-07-06T15:39:18.750
Just allocate a relatively small pagefile, like 50-100MB, on the boot drive to keep Windows happy, and then put a bigger one on one of your real hard drives. In my experience the OS hardly ever utilizes the one on the boot drive in this scenario. – martineau – 2014-07-06T16:10:11.617
The SSD is already at 5TB of writes, which is rather concerning after just 6 months of use. While SSDs have much longer lifespans now, being at 30TB in 3 years is definitely not within the projected endurance of the SSD.
@martineau that's a solution I used for some time; the issue is that the only drive not getting completely full is the boot drive. – PTS – 2014-07-06T16:13:56.657
How did you measure 5TB of writes? And how do you know it was the pagefile doing the majority of those writes and not other software? Also, 5TB in 6 months is 28 GB/day. WD's 128 GB drive is rated for 35 GB/day and their 256 GB drive for 70 GB/day over a 3-year service life; only the 64GB drive has a smaller limit of 17.5 GB/day. So unless you are using a small SSD, you are WAY below the maximum recommended GB/day with 5TB of writes at 6 months. You would need to be at 6.24 TB over 6 months to match even the medium drive. – Scott Chamberlain – 2014-07-06T16:38:40.123
That's what the drive is reporting in its internal stats (Samsung drive, stats shown in Samsung Magician). I'm guessing that's because writing 32GB nearly every day to the drive since its first use adds up pretty closely to my current space usage (just added stuff, barely deleted anything). The computer often runs over several days or is sometimes off for a few days, so it's not exactly 6 months at 32GB/day of writes, more like 4-5 months. – PTS – 2014-07-06T17:51:43.217
"Modern" (and cheap) SSDs tend to use MLC/TLC technologies (it's cheaper), whereas old (and expensive) ones used SLC. Given that the cycle count per cell hasn't increased and the flash is still the same except that there are now more bits in one cell, the lifetime of SSDs has in fact decreased. However, modern OSes now use some tricks to keep it high enough. (I've heard Windows 7/8 disables the pagefile by default when it's not needed.) The "new" TRIM command also helps. On a cheap TLC SSD, don't expect much more than 900 cycles on a single cell. Remember to leave a reasonable amount of free space. – piernov – 2014-07-06T23:00:17.363