Why do default settings on 64-bit Windows still imply a large page file?

0

My question is based on the observation that since the introduction of 64-bit Windows, many more people around me seem to run into page-file-related lockups of their systems. E.g., something as simple as zeros(2e5) in MATLAB allocates not just 1600 kB of zeros (a vector of 2e5 doubles), but a 2e5-by-2e5 matrix of doubles, i.e. about 320 GB. Thanks to the 64-bit address space, this amount can be addressed, but very often not in RAM. (I could name many examples like this that all reduce to memory allocation.) As a result, Windows resorts to using the page file and stores hundreds of gigabytes of zeros on the disk. Only after that (or after a forced shutdown) does your system become responsive again.

In 32-bit Windows, this problem was largely prevented by the per-process address space being limited to at most 4 GB.

So why does Microsoft still configure a large page file by default? From what I observe, the default size grows with the amount of RAM, which does not really make sense unless you have a really large, really fast drive such as a terribly expensive SSD. I tend to turn off page files on 64-bit Windows installations for that reason. Or shouldn't I? What are the reasons for or against doing this?

bers

Posted 2018-06-08T12:19:05.347

Reputation: 557

The page file is an integral part of memory management on Windows. If you disable the page file, you will not be able to use all physical memory. Please read up on how virtual memory works on Windows. – Daniel B – 2018-06-08T12:27:42.613

Answers

3

Windows does not do lazy allocation of memory the way Linux does, and the amount of allocatable memory is dependent on the page file.

On Linux there is the assumption that any "sparse" allocation of memory will not be fully utilised, so it lets programs continually overallocate memory until memory is physically exhausted. If programs then actually use the memory they committed to and there is insufficient RAM or swap to meet the demand, the kernel starts killing processes.

Windows does the opposite: it assumes that all allocated memory will eventually be used, so all reservations are honoured until the total of physical memory plus page file size is reached. If a program is able to allocate memory, it is able to use it. If it is not able to allocate memory, it has hit the limit of RAM plus page file size.

With the page file disabled, a program that allocates memory it is not actually using (i.e. it assumes a page file is there) can make you run out of memory while you still have some "free".

If you want to make use of all of your RAM on a Windows system then you should let the system function as intended by giving it a page file.
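A minimal C sketch of this behaviour, assuming a Windows build environment (illustrative only, not taken from the answer): it keeps committing 1 GiB blocks with VirtualAlloc without ever writing to them. With no page file, the loop stops at roughly the amount of free RAM; with a system-managed page file it runs considerably longer, because Windows can back the commitments with the page file and even grow it.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Commit memory in 1 GiB blocks until the commit limit (RAM + page file)
           is reached. The memory is never written to, so it occupies little
           physical RAM, yet every block still counts against the commit limit.
           Note: running this will push the system's commit charge to its limit. */
        const SIZE_T block = (SIZE_T)1 << 30;   /* 1 GiB */
        SIZE_T committed = 0;

        for (;;) {
            void *p = VirtualAlloc(NULL, block, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p == NULL)
                break;                           /* commit limit hit */
            committed += block;
        }

        printf("Committed %llu GiB before allocation failed.\n",
               (unsigned long long)(committed >> 30));
        return 0;
    }

The key point is that nothing was ever written to this memory, yet the allocations still fail once RAM plus page file is exhausted.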

Mokubai

Posted 2018-06-08T12:19:05.347

Reputation: 64 434

Minor nitpick: You cannot disable virtual memory. – Daniel B – 2018-06-08T12:59:17.593

@DanielB I had tried not to confuse swap with virtual, but sadly my old mind failed me. Thanks for pointing it out. :) – Mokubai – 2018-06-08T13:02:12.417

Thanks for your answer. Unfortunately, it does not address the specific setting of my question, which I (implicitly) described as having lots of RAM and a less-than-lightning-fast disk. For your explanation to be practically relevant, in most cases serious amounts of RAM would need to be paged out to the page file, which takes a lot of disk time. – bers – 2018-06-09T17:28:00.140

1

The pagefile provides two major benefits to the OS, neither of which changes significantly with a 64-bit OS.

  1. The pagefile increases the commit limit.

When an application allocates memory, Windows promises, or commits itself, to have sufficient storage available for it, even under a worst-case scenario. This storage may be in RAM or in the pagefile. The commit limit is defined as RAM size plus pagefile size, minus a small overhead. With no pagefile, the commit limit will be somewhat less than the RAM size. The memory manager keeps track of the total committed memory (the commit charge) to ensure that it never exceeds the commit limit.

With no pagefile the commit limit is a hard limit that cannot be increased while the OS is running. With the default pagefile configuration the commit limit is not only much larger but it is a soft limit that can be increased by expanding the pagefile when this is necessary.

Hitting the commit limit in Windows is a bad thing. Most applications don't deal with this eventuality very well and the OS itself often cannot tolerate it.
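As an illustrative C sketch (assuming a Windows build environment), the commit limit and the remaining commit headroom can be read with GlobalMemoryStatusEx; despite the field names, ullTotalPageFile is the system commit limit (roughly RAM plus pagefile) and ullAvailPageFile is the commit charge still available:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Query the commit limit and the remaining commit headroom. */
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);

        if (!GlobalMemoryStatusEx(&ms)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }

        printf("Physical RAM:     %llu MiB\n", ms.ullTotalPhys / (1024 * 1024));
        printf("Commit limit:     %llu MiB\n", ms.ullTotalPageFile / (1024 * 1024));
        printf("Commit available: %llu MiB\n", ms.ullAvailPageFile / (1024 * 1024));
        return 0;
    }

When the available commit approaches zero, allocations start failing, which is the situation described above.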

  2. The pagefile optimizes RAM usage.

At any given time a computer is likely to contain a great deal of data that has not been accessed for a long time and in fact may never be accessed during the session. The memory manager of course has no way of knowing how important this data is so it must keep it somewhere.

Storing all this rarely used data in high speed RAM is a serious misuse of this precious resource. If RAM did not have this burden there would be more available for application use and for caching purposes. Caching is a really big deal in a modern OS and is a major contributor to good performance.

The pagefile provides a place where the memory manager can offload this rarely used data, relieving RAM of this duty. It is true that there is a cost in doing this, but remember it is rarely used data, so the cost should not be serious. And the memory manager has numerous optimizations to minimize it.

But rather than thinking of this as a cost, consider it an investment in performance. Just as making wise investments with money is a good thing, the memory manager invests a little time in using the pagefile in the anticipation that it will bring big dividends later. It usually works.

This is not some new idea. It has been used in Windows and Linux for many years and in large computer systems long before that. It is a tried and true principle that has been optimized over the decades.

Bottom line: let Windows manage the pagefile as it wishes. The designers know what they are doing. Unfortunately, Microsoft has not done a very good job of communicating this to users, and there are many misconceptions. A great deal of what you read on the Internet about the pagefile contains serious errors, to say the least.

LMiller7

Posted 2018-06-08T12:19:05.347

Reputation: 1 849

Thanks for your answer. Unfortunately, it does not address the specific setting of my question, which I (implicitly) described as having lots of RAM and a less-than-lightning-fast disk. For your explanation to be practically relevant, in most cases serious amounts of RAM would need to be paged out to the page file, which takes a lot of disk time. – bers – 2018-06-09T17:28:09.870

The answer is not quite correct. If you simply malloc(lots_of_bytes), even though the virtual address space has to be committed, it doesn't actually take up space anywhere (not even in the pagefile) until you access it. The first time you read a page out of such a region, Windows will allocate a physical page full of zeroes for you. It seems, per your description, that MATLAB is insisting on writing zeros proactively to the region, apparently not trusting Windows to do the right thing. That's not a fault of the OS. – Jamie Hanrahan – 2018-06-09T20:04:18.187
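To make this concrete, a hedged C sketch (assuming a 64-bit Windows build linked against psapi.lib; illustrative only, not from the thread): committing memory with VirtualAlloc raises the process commit charge immediately, but physical pages are only assigned when the pages are first touched.

    #include <windows.h>
    #include <psapi.h>      /* link with psapi.lib */
    #include <stdio.h>

    static void report(const char *label)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
            printf("%-11s working set %5llu MiB, commit charge %5llu MiB\n",
                   label,
                   (unsigned long long)pmc.WorkingSetSize / (1024 * 1024),
                   (unsigned long long)pmc.PagefileUsage / (1024 * 1024));
    }

    int main(void)
    {
        const SIZE_T size = (SIZE_T)4 << 30;     /* 4 GiB (assumes a 64-bit build) */

        report("start:");

        /* Committing raises the commit charge, but assigns no physical pages yet. */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (p == NULL) {
            fprintf(stderr, "VirtualAlloc failed - commit limit reached?\n");
            return 1;
        }
        report("committed:");

        /* Touching each page materializes demand-zero pages in RAM. */
        for (SIZE_T i = 0; i < size; i += 4096)
            p[i] = 1;
        report("touched:");

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }

The working set only jumps at the "touched" step; that is the demand-zero behaviour described in the comment, and it is why explicitly writing zeros, as MATLAB apparently does, forces the whole allocation into RAM and, when RAM runs out, into the page file.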