219

I'm running Windows 7 on a dual core, x64 AMD with 8 GB RAM.

Do I even need a page file?

Will removing it help or hurt performance?

Would it make a difference if this is a server or a desktop?

Does Windows 7 vs. Windows 2008 make a difference with a page file?

Peter Mortensen
Jason

13 Answers

306

TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.

Many people seem to assume that Windows pushes data into the pagefile on demand; e.g., something wants a lot of memory, there is not enough free RAM to fill the need, so Windows begins madly writing data from RAM to disk at the last minute in order to free up RAM for the new demands.

This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.
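The backing-store behavior described above can be sketched in a toy model (purely illustrative - the class and method names below are invented, and real Windows memory management is far more involved):

```python
# Toy model of the backing-store idea: resident pages are mirrored to
# "disk" ahead of time, so reclaiming RAM needs no last-minute writes.
# All names here are invented for illustration; this is not the Windows API.
class ToyMemoryManager:
    def __init__(self):
        self.ram = {}        # page_id -> data, currently resident
        self.pagefile = {}   # page_id -> data, copy already on "disk"

    def store(self, page_id, data):
        self.ram[page_id] = data
        self.pagefile[page_id] = data    # mirrored preemptively (lazily, in reality)

    def reclaim(self, count):
        """Free RAM instantly: every resident page already has a disk copy."""
        victims = list(self.ram)[:count]
        for page_id in victims:
            del self.ram[page_id]        # no write needed at eviction time
        return victims

    def load(self, page_id):
        if page_id not in self.ram:      # hard fault: page back in from disk
            self.ram[page_id] = self.pagefile[page_id]
        return self.ram[page_id]
```

Because `store` mirrored the page up front, `reclaim` is pure bookkeeping; the expensive disk write already happened in the background.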

Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL for example. So this doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is most frequently used, and so clears from RAM that data which has gone longest without being accessed.
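That "gone longest without being accessed" policy is essentially least-recently-used eviction. A minimal sketch of the idea (a textbook LRU, not Windows' actual working-set trimming algorithm):

```python
from collections import OrderedDict

class LRUWorkingSet:
    """Textbook LRU eviction: the page untouched longest is cleared first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()           # least recently used first

    def access(self, page_id, data=None):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # mark as recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict the coldest page
            self.pages[page_id] = data
        return self.pages[page_id]
```

With capacity 2, accessing pages "code", "doc", then "code" again and finally a new page evicts "doc", since "code" was touched more recently.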

Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM - possibly even OS code. Now every time those other apps (or the OS itself!) need access to that data, the OS must page it in from its backing store on disk, leading to much thrashing - because without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.

There are of course many resource/utilization scenarios. It is not impossible that you have one of the scenarios under which there would be no adverse effects from removing pagefile, but these are the minority. In most cases, removing or reducing pagefile will lead to reduced performance under peak-resource-utilization scenarios.

Some references:

dmo noted a recent Eric Lippert post which helps in understanding virtual memory (though it is less directly related to the question). I'm putting it here because I suspect some people won't scroll down to other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!

quux
  • 1
    Prepare for a rush of upvotes--this was mentioned in the podcast (http://blog.stackoverflow.com/2009/06/podcast-59/). +1 from me. – mmyers Jun 24 '09 at 18:40
  • 25
    For Jeff and Joel: it rhymes with "ducks" – quux Jun 24 '09 at 23:48
  • 2
    On Solaris it was/is even more involved. The swap file is mirrored in a RAM-disk-like tmpfs, so the memory is always almost full - but it is apparently provable that this is the optimal strategy. – Martin Beckett Jun 25 '09 at 19:40
  • 2
    I have long believed that, instead of allowing Windows to manage my page file size, I should set it to a fixed amount (e.g. min 2GB, max 2GB), because letting it grow and shrink can cause fragmentation problems. Is that good thinking, or should I follow your first line and let Windows handle everything? – John Fouhy Jul 29 '09 at 03:26
  • John, my own preference is for system-managed. But you should read the article I linked (Pushing the Limits: Virtual Memory) closely and see how you feel about this. Note carefully the parts where Mark used Testlimit to force the pagefile to grow, creating that stairstep graph. There is a few seconds of delay while the PF grows, so if you have memory demands that come on VERY quickly, this could create issues for you. I believe situations where this is a problem to be corner cases, pretty uncommon. – quux Jul 29 '09 at 16:28
  • 1
    @quux, what about http://support.microsoft.com/kb/889654: "as more RAM is added to a computer, the need for a page file decreases. If you have enough RAM installed in your computer, you may not require a page file at all, unless one is required by a specific application." Am I misreading something or is this a different/special case? – hyperslug Aug 26 '09 at 23:49
  • hyperslug, see Sam's answer below (from Jul 22); it gives a great example of why a paging file can sometimes be useful even when you have a lot of available memory. Nothing in my answer above is meant to suggest that you *must always* have a paging file! Only that when you remove it, you may not envision every circumstance your computer will ever encounter. And the lack of pagefile could one day bite you on the butt, as it did Sam. – quux Aug 27 '09 at 14:22
  • Ha! That said, thank you, hyperslug, for the KB889654 link. It is good to see MS saying in clear print that the pages/sec counter is nearly useless, which I have been saying for quite some time. Do follow their advice to monitor pages input and pages output to get a better handle on whether you need a pagefile. Or better yet - leave the defaults in place. I've never seen a situation where an unused pagefile lowered performance; on the other hand, as Sam shows, having an *unavailable* pagefile can lead to nasty surprises. – quux Aug 27 '09 at 14:25
  • One quibble, it is pre*e*mptively – Rich Seller Dec 10 '09 at 09:17
  • Let's say you don't use more than 50% of your RAM. Why can't the backing store be somewhere else in RAM, instead of on a hard disk? (If that 50% limit is exceeded, then the OS could switch to using the disk as a backing store, obviously. But why should it do so from the very start?) – user541686 Oct 25 '11 at 01:44
  • 3
    @Mehrdad: Because you don't want to have to keep 50% of your RAM free. Sure, you could do that, but you would lose 50% of the physical RAM you could be using as a disk cache for active data. Free RAM is a sign of inefficiency, it's like FedEx trucks at a depot instead of on the road. It means you're *not* moving as much freight as you could because you're spending too much time loading and unloading. – David Schwartz Oct 27 '11 at 09:03
  • @David: So you're assuming that memory that is otherwise unused is always used for caching data? That's a nice idea, but I've **never** seen Windows cache 2 GiB of data. (I have 6 GiB of RAM, and I rarely even use 2 GiB of it at once, except for when I occasionally create 4-GiB RAM disks. Never do I notice Windows caching anything on the order of gigabytes -- it's usually a hundred or so megabytes *max*, after which it starts writing to the disk like crazy.) – user541686 Oct 27 '11 at 09:12
  • 1
    @Mehrdad: I'm not sure why you see that, but what I see is that pretty much all available RAM is used as cache. For example, the box I'm on right now (Win 7, 64-bit, 8GB of RAM) is doing typical desktop work. 385MB is free, 5,440MB is cached. This is as it should be. Free RAM is wasted RAM. You might as well hold data that's also on disk -- it can help (if you wind up needing it) and it can't hurt (you can always throw it away when RAM is scarce). – David Schwartz Oct 27 '11 at 09:20
  • @David: Not sure why it's different, but I have Win 7 x64 too. (I'm talking more about what I've noticed practically, rather than what the system is reporting -- I haven't checked that in a while.) And yes, while I agree that free RAM is wasted RAM, it wouldn't be wasted if it was used for duplicating information the same way the page file would, right? – user541686 Oct 27 '11 at 09:33
  • 3
    @Mehrdad: The problem is, with no page file, there's a lot of data that *must* be kept in RAM even though it will likely never, ever be accessed. Consider, for example, any memory allocated by a process that started on system startup but offers a service that will not be used for days. The system cannot prove the data won't be accessed and it has no place other than RAM to keep it. So the disk cache shrinks while data is kept in RAM that hasn't been accessed for days. – David Schwartz Oct 27 '11 at 09:35
  • 1
    @Mehrdad - keeping RAM 'backing store' in another RAM location would essentially nullify the point of having a backing store in the first place. It exists so that the RAM can be cleared (quickly!) and used for something else, should another demand come up. – quux Oct 27 '11 at 13:38
  • 1
    Also it should be pointed out that pagefile writes are 'lazy' anyway - IIRC they do not take priority over most userland disk i/o. So I'm curious what you think you will actually *gain* by limiting or removing the pagefile and/or moving backing store to RAM? – quux Oct 27 '11 at 14:28
  • @DavidSchwartz: Right, but at what point, exactly, do you have "enough" RAM? It seems like you would make the same claim no matter how much RAM the host has... which seems a little unreasonable to me. At *some* point RAM has to be sufficient, right? quux: You would free up unnecessary disk space, and also avoid unnecessary I/O (which, even though prioritized, still affects other things somewhat). – user541686 Oct 27 '11 at 17:24
  • Until the time when RAM is as cheap and plentiful as disk space, I can see few scenarios where I would be **so certain of all future usecases** in the life of the system that I could *guarantee* pagefile never being needed. The disk usage is cheap compared to the RAM cost. And I don't think the additional, low priority IOs saved are going to matter that much in a properly sized server. In short, when you're down to eliminating pagefile because the system *needs* that disk space or IO, your system is already almost certainly undersized/overloaded. – quux Oct 27 '11 at 18:01
  • @quux: No, I can't guarantee that either. So when I have to compile Chromium (for example), I just **turn on my pagefile**. In all other times, I turn *off* my pagefile. No need to be certain about anything -- a little control on the user's part can go a long way. – user541686 Oct 27 '11 at 21:34
  • @Mehrdad - Well, to each his own. I just wonder if and how you'd quantify the actual performance benefit you're getting when you disable pagefile. Aside from hard disk space, that is - which you clearly have to spare anyway, since you can turn it on at will. Anyway, this is turning into a discussion thread which SF is not well suited to, so I will stop here. Cheers. – quux Oct 27 '11 at 22:32
  • "Removing pagefile entirely can cause more disk thrashing." - you did not explain exactly how having a page file makes any difference in this scenario. – RomanSt Jan 19 '12 at 11:03
  • @romkyns Actually, I did. Read it again? – quux Jan 19 '12 at 11:45
  • Doesn't this imply that with SSDs, it might be better to turn off paging (excluding the case where you actually run out of RAM)? Thrashing wouldn't be so bad because reads from an SSD are really fast. – Mas Nov 07 '12 at 15:29
  • 2
    SSD reads are fast, but memory reads are faster. Pagefile lets you put what's frequently in use, into memory for speedier access. SSD does help on bringing old paged-out stuff back into RAM, which is clearly a good thing. But if SSD were a clean replacement for RAM, then we'd be phasing RAM out and going SSD entirely, right? See this e7 blog entry with some stats that help dispel the idea that pagefile will wear out an SSD: http://goo.gl/Q4nI – quux Nov 14 '12 at 02:03
  • Can I disable the mechanism described in your third paragraph on Win7/8/10? In my country HDD laptops are still common, and that preemptive writing to the pagefile makes them horribly slow because of near-constant unnecessary I/O. The only "solution" I found is to disable the pagefile, but that has all the issues pointed out before. – speeder May 20 '16 at 03:50
  • speeder: I'm not aware of a way to disable the Memory Manager's use of pagefile as backing store. But I think pagefile writes are FAR less constant than you imagine. Use perfmon's 'Memory\Page Writes\Sec' counter to monitor them on your system, and see for yourself. – quux May 28 '16 at 17:32
  • 1
    I did, ultimately I HAD to go for the "disable pagefile" route. As long pagefile was enabled, disk I/O was almost always at 100% and making the machine horribly slow, disabling pagefile fixed it. – speeder Feb 14 '17 at 19:10
82

Eric Lippert recently wrote a blog entry describing how Windows manages memory. In short, the Windows memory model can be thought of as a disk store where RAM acts as a performance-enhancing cache.

dmo
50

As I can see from the other answers, I am the only one who disabled the page file and never regretted it. Great :-)

Both at home and at work I have Vista 64-bit with 8 GB of RAM, and both have the page file disabled. At work it's nothing unusual for me to have a few instances of Visual Studio 2008, Virtual PC with Windows XP, 2 instances of SQL Server, and Internet Explorer 8 with a lot of tabs all working together. I rarely reach 80% of memory.

I'm also using hybrid sleep every day (hibernation with sleep) without any problems.

I started experimenting with this when I had Windows XP with 2 GB of RAM, and I really saw the difference. A classic example: icons in Control Panel stopped appearing one by one and instead showed up all at once. Firefox/Thunderbird startup times also improved dramatically. Everything started to work immediately after I clicked on something. Unfortunately 2 GB was too small for my application usage (Visual Studio 2008, Virtual PC and SQL Server), so I enabled the page file again.

But right now with 8 GB I never want to go back and enable page file.

For those talking about extreme cases, take this one from my Windows XP days.
When you try to load a large pivot table in Excel from an SQL query, Excel 2000 increases its memory usage pretty fast.
When you have the page file disabled, you wait a little, then Excel blows up and the system clears all of its memory.
When you have the page file enabled, you wait some time, and by the point you notice that something is wrong you can do almost nothing with your system. Your HDD is working like hell, and even if you somehow manage to run Task Manager (after a few minutes of waiting) and kill excel.exe, you must wait a minute or so until the system loads everything back from the page file.
As I saw later, Excel 2003 handles the same pivot table without any problems with the page file disabled - so it was not a "too large dataset" problem.

So in my opinion, a disabled page file even protects you sometimes from poorly written applications.

In short: if you are aware of your memory usage, you can safely disable it.

Edit: I just want to add that I installed Windows Vista SP2 without any problems.

Peter Mortensen
SeeR
  • 3
    I've had my pagefile disabled, and regretted it the moment I really used my memory. So be happy you have more memory than you need. – Sam Jul 22 '09 at 09:45
  • 7
    +1, "me too" :-). Same story - 8GB memory, Vista x64, running Visual Studio with ReSharper + SQL Server Express + IIS + 1-2 virtual machines (each with 1500MB memory) + bunch of utilities - never had a problem. – Milan Gardian Sep 29 '09 at 20:17
  • 22
    I love it how everyone is saying "Microsoft has spent many hours thinking about this problem, so don't mess with it", yet completely ignore real world experiences. I've had the paging file disabled since XP and never regretted it. It's like the computer got an injection of awesome. – AngryHacker Oct 20 '09 at 06:36
  • Do your XP virtual machines also have pagefiles disabled? – mpbloch Apr 13 '10 at 22:57
  • @mpbloch No, because I always set VM memory to the lowest required by its usage. – SeeR May 10 '10 at 18:16
  • 5
    It's pretty standard practice to disable paging on servers that iSCSI boot; paging over the SAN would be noticeably slow. You just really have to watch your memory usage and stay away from the max. – Chris S May 18 '10 at 21:43
  • 8
    -1 I don't see any references in this answer. I actually had my system crash because the page file was disabled and my Paged Pool memory ran full. Yet, my physical memory usage was only at 2 GB... – Tamara Wijsman Dec 04 '11 at 13:59
  • Software engineer, Photoshop (amateur) artist and avid gamer here for the past 20 years. I disabled the paging file back in XP, and have never looked back. At this moment, I have two VMs running while playing BF4 while answering this SF question with 12 tabs open in Chrome. Granted, I have always maxed my RAM (8, 12, 16, 32, etc.) - this particular machine has 32 GB. Instant access and no thrashing of my precious SSDs. The only issues I've had are with Visual Studio and bad code that creates memory leaks, using 20 GB of RAM! Not a pagefile need - better coding needs! – Eric Duncan Aug 05 '14 at 19:35
  • 2
    I think I can see a pattern here... All anecdotal evidence from Windows XP users. Maybe something has changed in the Windows kernel in 15 years? You saw an improvement back then and now with Windows 7 you're riding on the placebo effect. – ntoskrnl Sep 11 '14 at 12:36
  • I see a pattern too: real-world evidence from Windows XP users. Maybe something has changed in PCs in 20 years? Paging was necessary back then and now even with 16 GB of RAM people are riding on the inertia effect. –  Sep 17 '14 at 18:38
  • Found this thread by accident, but I decided to comment here: I am thinking of disabling the pagefile on Windows 8, due to the third paragraph of quux's answer. I have a Windows 8 laptop with a very slow HDD; Windows' behaviour of filling the RAM with cache and preemptively writing stuff to the pagefile made the system VERY slow and unresponsive. I wanted an alternative besides disabling the pagefile and asked on Super User about it, but sadly got no decent answers. http://superuser.com/q/1066289/27885 – speeder May 20 '16 at 03:48
36

You may want to do some measurement to understand how your own system is using memory before making pagefile adjustments. Or (if you still want to make adjustments), before and after said adjustments.

Perfmon is the tool for this; not Task Manager. A key counter is Memory - Pages Input/sec. This will specifically graph hard page faults, the ones where a read from disk is needed before a process can continue. Soft page faults (which are the majority of items graphed in the default Page Faults/sec counter; I recommend ignoring that counter!) aren't really an issue; they simply show items being read from RAM normally.
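On a live system you could export that counter to CSV (e.g. with perfmon's logging, or `typeperf "\Memory\Pages Input/sec"`). Here is a sketch of post-processing such an export to find hard-fault spikes; the CSV below is fabricated sample data and the spike threshold is arbitrary:

```python
import csv
import io

# Fabricated sample in the CSV shape a counter export produces (header + rows).
sample = '''"Time","\\\\HOST\\Memory\\Pages Input/sec"
"12:00:01","0.000"
"12:00:02","3.210"
"12:00:03","845.700"
"12:00:04","2.100"
'''

def hard_fault_spikes(csv_text, threshold=100.0):
    """Return timestamps where hard page faults per second exceed the threshold."""
    rows = csv.reader(io.StringIO(csv_text))
    next(rows)                                   # skip the header row
    return [t for t, value in rows if float(value) > threshold]

print(hard_fault_spikes(sample))                 # ['12:00:03']
```

A largely empty result list corresponds to the flat-at-zero graph described below: the occasional spike is normal, but sustained spikes point at memory pressure.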

Perfmon graph http://g.imagehost.org/0383/perfmon-paging.png

Above is an example of a system with no worries, memory-wise. Very occasionally there is a spike of hard faults - these cannot be avoided, since hard disks are always larger than RAM. But the graph is largely flat at zero. So the OS is paging-in from backing store very rarely.

If you are seeing a Memory - Pages Input/sec graph which is much spikier than this one, the right response is to either lower memory utilization (run fewer programs) or add RAM. Changing your pagefile settings would not change the fact that more memory is being demanded from the system than it actually has.

A handy additional counter to monitor is PhysicalDisk - Avg. Queue Length (all instances). This will show how much your changes impact disk usage itself. A well-behaved system will show this counter averaging at 4 or less per spindle.

quux
34

I've run my 8 GB Vista x64 box without a page file for years, without any problems.

Problems did arise when I really used my memory!

Three weeks ago, I began editing really large image files (~2 GB) in Photoshop. One editing session ate up all my memory. Problem: I was not able to save my work since Photoshop needs more memory to save the file!

And since it was Photoshop itself, which was eating up all the memory, I could not even free memory by closing programs (well, I did, but it was too little to be of help).

All I could do was scrap my work, enable my page file, and redo all my work - I lost a lot of work due to this and cannot recommend disabling your page file.

Yes, it will work great most of the time. But the moment it breaks it might be painful.

Peter Mortensen
Sam
  • 7
    you should save more often :D in order to minimize the damages – alexandrul Dec 10 '09 at 05:19
  • 1
    easy to say, when saving does take several minutes, this is a pita. – Sam Dec 21 '09 at 09:46
  • We're all save-happy when using more complex software, but sometimes a "crash" ends up aligning with "Oh crap, I haven't saved in a while." – Damon Dec 30 '17 at 01:34
  • Enable paging with a small pagefile limit so that at normal times you avoid the severe disadvantages of the bogus Windows paging algorithm. With it enabled, you can increase the limit without a reboot in an exceptional case that requires more than your physical memory, then decrease it again before a reboot once tasks are settled. – kxr Aug 27 '22 at 15:47
20

While the answers here covered the topic quite well, I will still recommend this read:

http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx

He talks about pagefile size near the end:

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).

I really like Mark's articles.

Jeff Atwood
Radim Cernej
  • +1, simply for the Mark Russinovich link. It's worth pointing out that Win7 even pops up a notification to point out that you will not be able to "trace down system problems" if you disable the swap file. – cgp Mar 21 '11 at 02:59
  • Another theoretical article. Windows' paging algorithm still does not work reasonably today ... – kxr Aug 27 '22 at 15:50
14

The best answer I can think of is that under a normal load you may not use up the 8 GB, but it is the unexpected loads where you will run into trouble.

With a page file, the system will at least run slowly once it starts hitting the page file. But if you remove the page file, it will just die (as far as I know).

Also, 8 GB seems like a lot now, but a few years down the line it might be considered the minimum amount of memory for a lot of software.

Either way - I would recommend keeping at least a small page file; but others please correct me if I am off-base.

Peter Mortensen
Dave Drager
  • 4
    I'd go a bit further and *not* cap the page file. That's not really improving things. Let windows do it...they know better. – Michael Haren Jun 24 '09 at 19:37
  • 1
    Just tried to use Media Player Classic to load a 6GB mkv file. It ran me out of my RAM and pagefile memory. Went back to VLC pretty quick. +1 for the "you never know what you'll run into". Eventually MPC crashed and my RAM was restored, but what if you get a DLL in third party software with a memory leak? You will have a lot more mileage if you have some disk-backed memory to help you out. – mpbloch Apr 13 '10 at 23:00
  • 1
    Plus the point, what's the good of having 8GB if you have to live in constant fear of actually using it?! – David Schwartz Oct 27 '11 at 09:05
  • The "keep at least a small pagefile" seems a little weird to me since it is not clear how Windows is going to use it. For instance, it might thrash it even more than a bigger pagefile that offers more space - I'm guessing, but as long as there is no reliable source on this I would consider the small pagefile advice possibly harmful and recommend a standard practice instead. – mafu Jan 30 '12 at 21:39
  • @MichaelHaren - "...they know better..." - That's why SuperFetch used to work so "well" right??? Also, now that it's 2021 and my Mechanical HDs are seriously slow by today's standards, I'm wondering if a Win 10 Swap file would have a negative impact on the life of a newly purchased SSD. – Shawn Eary Feb 19 '21 at 15:49
6

You didn't mention whether it's a 64-bit edition of Windows, but I assume it is.

The pagefile serves many things, including generating a memory dump in case of BSoD (Blue Screen of Death).

If you don't have a pagefile, Windows won't be able to page out to disk if there isn't enough memory. You may think that with 8 GB you won't reach that limit. But you may have bad programs leaking memory over time.

I think Windows won't let you hibernate or use standby without a pagefile (but I haven't tried yet).

Windows 7 / 2008 / Vista don't change how the page file is used.

I saw one explanation from Mark Russinovich (Microsoft Fellow) explaining that Windows can be slower without a page file than with one (even with plenty of RAM), but I can't find the root cause again.

Are you out of disk space? I would keep a minimum of 1 GB to be able to have a kernel dump in case of a BSoD.

Peter Mortensen
Mathieu Chateau
  • Do you mean this? http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx If so, his advice is lame. He says the page file will increase performance because it would give more RAM to other apps. True, if you don't have much RAM. But if you have more than enough, a page file is never faster. – Pyrolistical Jun 10 '09 at 20:18
  • It was in a Sysinternals video with Solomon. It had something to do with the kernel paged pool. – Mathieu Chateau Jun 10 '09 at 21:19
  • You can't post an "answer" when you have no idea: I have a Windows Vista 32-bit laptop with 4GB of RAM and I put it into standby all the time. Can you at least restrict yourself to supplying answers to questions you actually know answers to? – PP. Jan 15 '10 at 17:50
  • What PP tried to say: The hibernation process uses a file separate from the swap file, so this is not an issue in this case. – mafu Jan 30 '12 at 21:41
  • Pyrolistical, it's probably far too late for you to see this, but turn your statement around and phrase it as a question: When does pagefile slow anything down? A good answer to that would prove your theory. – quux Mar 06 '14 at 23:39
6

The only person that can tell you if your servers or workstations "need" a page file is you, with careful use of performance monitor or whatever it's called these days. What applications are you running, what use are they seeing, and what's the highest possible memory use you could potentially see?

Is stability worth possibly compromising for the sake of saving a minute amount of money on smaller hard disks?

What happens when you download a very large patch, say a service pack? If the installer service decides it needs more memory than you figured in order to unpack the patch, what then? If your virus scanner (rightly) decides to scan this very large package, what sort of memory will it need while it unpacks and scans the patch file? I hope the patch archive doesn't contain any archives itself, because that would absolutely murder your memory use figures.

What I can tell you is that removing your page file has far higher probability of hurting than helping. I can't see a reason why you wouldn't have one - I'm sure that there might be a few specialist cases where I'm wrong on that one, but that's a whole other area.

Peter Mortensen
Rob Moir
5

I disabled my page file (8 GB on an x86 laptop) and had two problems even with 2500 MB free:

  1. ASP.NET error trying to activate WCF service: Memory gates checking failed because the free memory (399,556,608 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment configuration element.

    Quite how 3.7 GB is less than 5% of 8 GB I will never know!!

  2. Getting Close programs to prevent information loss dialog: When 75% of my RAM is used I get a dialog box telling me to close programs. You can disable this with a registry modification (or possibly by disabling the 'Diagnostics Policy Service').

In the end I decided to just turn it back on again. Plain and simple, Windows was never designed to be used without a page file; it's optimized to run with paging, not without. If you're planning on using more than 75% of your memory and you don't want to mess with your registry, then disabling the page file may not be for you.
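For what it's worth, the arithmetic in that WCF error message holds up once the quoted figure is read as bytes rather than kilobytes (see the comment exchange below):

```python
free_bytes = 399_556_608        # figure quoted in the WCF activation error
total_bytes = 8 * 1024**3       # 8 GiB of physical RAM

pct = 100 * free_bytes / total_bytes
print(f"{pct:.2f}% free")       # 4.65% free -- genuinely below the 5% gate
```

So the memory-gate check itself was behaving correctly; the confusion came from misreading ~381 MB as ~3.7 GB.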

Peter Mortensen
Simon
  • 1
    The ASP.NET error strikes me as perhaps being a 32-bit issue, but if the number you provided is correct (399556608 = 399,556,608) then the error is correct - ~400MB is approximately 5% of 8GB. – fencepost Oct 12 '10 at 18:11
  • @fencepost good catch - must have read that as kb for some reason. weird – Simon Aug 26 '16 at 05:07
3

It seems a lot of severely limited people have an opinion on this subject but have never actually tried running their computer without a page file.

Few, if any, have actually tried it. Even fewer seem to know how Windows treats the pagefile. It doesn't "just" fill up when you run out of physical RAM. I bet most of you didn't even know that your "free" RAM is used as a file cache!

You CAN get massive performance improvements by disabling your page file. Your system WILL be more susceptible to out-of-memory errors (and do you know how your applications respond in that scenario - for the most part the OS just terminates the application). Start-up times from standby or long idle periods will be far snappier.

If Microsoft actually permitted you to set an option whereby the pagefile ONLY gets used when out of physical RAM (and all the file buffers have been discarded) then I would think there was little to gain from disabling the pagefile.

PP.
  • 1
    Disabling the page file results in performance degradation, you won't see any performance improvement from introducing memory load. Usage of the page file only when you are out of memory is certainly not what you want... – Tamara Wijsman Dec 04 '11 at 14:01
3

Your total memory available is your pagefile + actual memory.

The key question is whether your anticipated total memory usage for all apps and the operating system approaches 8 GB. If your average memory usage is 2 GB and your max memory usage is only 4 GB, then having a page file is pointless. If your max memory usage is closer to 6-7 GB or greater, then it's a good idea to have a page file.

PS: Don't forget to allow for growth in the future!
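That rule of thumb could be sketched as a quick check (the headroom threshold here is an arbitrary illustration, not an official guideline):

```python
def pagefile_advisable(ram_gb, peak_usage_gb, headroom=0.75):
    """Suggest keeping a pagefile when peak usage approaches physical RAM."""
    return peak_usage_gb >= ram_gb * headroom

print(pagefile_advisable(ram_gb=8, peak_usage_gb=4))   # False: plenty of slack
print(pagefile_advisable(ram_gb=8, peak_usage_gb=7))   # True: peak near RAM size
```

Remember that `peak_usage_gb` should be your worst-case figure (including future growth), not the average.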

-3

This is anecdotal, but we run a Windows Server 2003 Terminal Server for about 20 users, with 10-15 logged on at a time, and it has 8 GB of RAM. We do not run with a page file, and our server runs faster than it did before. This obviously is not a solution for everything, but we have run like this for two years now and have had no issues that I am aware of.

Peter Mortensen
Matt
  • 1
    You do have issues, but you don't notice them; consider how an increased memory load can slow down simultaneous requests. Enabling the page file makes such moments snappier... – Tamara Wijsman Dec 04 '11 at 14:06