17

I have a webserver with 8GB of RAM that runs one fairly intensive PHP site doing file manipulation, graphing, emailing, forums, you name it. The environment is far from static, which leads me to believe that very little could be gained from caching anything in RAM, since almost every request to the server creates new or updated pages. A lot of caching is also done client-side, so we see a ton of 304 responses for images, JavaScript, and CSS.

Additionally, I do have language files that are written to flat files on the server, where caching in RAM is definitely better than reading from disk. But there are only a handful of files like this.

In about two weeks I've gone from 98% free RAM to 4% free RAM. This occurred during a period in which we also pushed several large SVN updates to the server.

My question is whether my server will be better tuned if I periodically clear the cache (I'm aware of Linus Torvalds' feelings about dropping caches) using the following command:

sync; echo 3 > /proc/sys/vm/drop_caches

Or would I be better off editing the following file:

/proc/sys/vm/swappiness  

If I replace the default value of 60 with 30, I should get much less swapping and a lot more reuse of stale cache.
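For reference, here is a sketch of how that swappiness change could be applied (the value 30 is the one proposed above; the write operations need root, so they are shown as comments):

```shell
# Read the current value (60 is the usual default on most distros)
cat /proc/sys/vm/swappiness

# Apply a new value at runtime (root only; lost on reboot):
#   sysctl -w vm.swappiness=30
# or equivalently:
#   echo 30 > /proc/sys/vm/swappiness

# To persist the change across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness = 30
```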

It sure feels good to see all that cache freed up using the first command but I'd be lying to you if I told you this was good for the desktop environment. But what about a web server like I've described above? Thoughts?

EDIT: I'm aware that the system will reclaim memory from the cache as it needs it, but thanks for pointing that out for clarity. Am I imagining things, or does Apache slow down when most of the server's memory is tied up in cache? Is that a different issue altogether?
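One way to sanity-check this is to look at what the kernel itself reports: on Linux, most of the "used" memory in a situation like this is reclaimable page cache, and `/proc/meminfo` breaks it down (a minimal read-only sketch; `free -m` gives the same picture in summary form):

```shell
# How much memory is cache vs. genuinely free, straight from the kernel.
# "Cached" and "Buffers" are reclaimable and will be given back to
# applications on demand; only a low MemFree by itself is not a problem.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```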

Patrick R
  • 3
    What sort of advantage would you get from clearing the cache? Having empty memory would mean wasting memory. – sybreon Jan 23 '10 at 08:07
  • 3
    If PHP/Apache/whatever needed more it would be using it and the memory wouldn't be used as cache. – Zoredache Jan 23 '10 at 09:12
  • perceived load time over real load time - that's part of what I'm attempting to explore. I have a followup question that is a good cross reference: http://serverfault.com/questions/108745/linux-swapiness-adjusting-kernel-vm-settings – Patrick R Jan 27 '11 at 21:51
  • "very little could be gained from caching anything in ram since almost every request to the server creates new or updated pages." - For Pete's sake, it surely does this using static source/template files, which would benefit tremendously from being immediately available in RAM. – underscore_d Oct 06 '15 at 01:16

1 Answer

16

Clearing caches will hinder performance, not help. If the RAM were needed for something else, it would be used by something else, so all you are doing is reducing the cache hit ratio for a while after you've performed the clear.

If the data in cache is very out of date (i.e. it is stuff cached during an unusual operation) it will be replaced with "newer" data as needed without you artificially clearing it.

The only reason for running sync; echo 3 > /proc/sys/vm/drop_caches normally is if you are going to try to do some I/O performance tests and want a known state to start from (running the cache drop between runs to reduce differences in the results due to the cache being primed differently on each run).
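As a concrete sketch of that benchmarking use (the file path and size here are arbitrary choices, and the cache drop is guarded so the script degrades gracefully without root):

```shell
# Create a scratch file to read back (path and size are arbitrary)
dd if=/dev/zero of=/tmp/cachetest.bin bs=1M count=32 2>/dev/null

# Start the run from a cold cache -- only possible as root
if [ "$(id -u)" -eq 0 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi

# First read: comes from disk if the caches were dropped
t0=$(date +%s%N)
cat /tmp/cachetest.bin > /dev/null
t1=$(date +%s%N)
echo "first read:  $(( (t1 - t0) / 1000000 )) ms"

# Second read: served from the page cache, typically much faster
t2=$(date +%s%N)
cat /tmp/cachetest.bin > /dev/null
t3=$(date +%s%N)
echo "second read: $(( (t3 - t2) / 1000000 )) ms"

rm -f /tmp/cachetest.bin
```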

The kernel will sometimes swap a few pages even though there is plenty of RAM it could claim back from cache/buffers, and tweaking the swappiness setting can stop that if you find it to be an issue for your server. You might see a small benefit from this, but are likely to see a temporary performance drop by clearing cache+buffer artificially.
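Before touching swappiness at all, it is worth confirming that the server is actually swapping. A read-only sketch using the kernel's own counters (cumulative since boot, so sample them twice and compare):

```shell
# Pages swapped in/out since boot; if these barely change between
# samples taken a few minutes apart, swapping is not your bottleneck
grep -E '^(pswpin|pswpout)' /proc/vmstat

# How much swap space is configured and currently in use
grep -E '^(SwapTotal|SwapFree):' /proc/meminfo
```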

David Spillett
  • Is this method of clearing cache+buffers artificially worse than clearing them by rebooting the server (ignoring the fact that the server is temporarily offline)? Am I creating a chance of I/O errors coming up during this process, or something like that? – Patrick R Jan 23 '10 at 13:44
  • 2
    PatrickR: Clearing the cache is counter productive. You're increasing load time for anything that would otherwise have been stored in the cache. – Matt Simmons Jan 23 '10 at 16:37
  • @Matt: What you're saying is correct in many situations, but what I'm getting at is that I believe the system I have above doesn't really benefit from caching, since very few files are used twice. It seems like I'm caching a heck of a lot that will never be used again. I'm thinking I should be more aggressive about reclaiming cache rather than creating additional cache. – Patrick R Jan 23 '10 at 20:39
  • 5
    It's not just the files, it's things like cached metadata and more. Ever do an 'ls' on a huge directory, and have it take forever, then do it again, and it's instantaneous? It was stored in memory using cache. We both know that the Linux VM systems are complex. Why not just let it handle it? The only time your applications will run over into swap is if they request > the actual free memory at once, and then only until cache is freed – Matt Simmons Jan 23 '10 at 21:28
  • @Matt: good example. can anyone sound off on adjusting the swappiness default? I'm going to adjust mine to get some values to share. – Patrick R Jan 25 '10 at 04:28
  • If anyone has any additional insights on changing the swappiness default of 60, I'd greatly appreciate it. – Patrick R Jan 26 '10 at 01:46
  • 1
    @PatrickR: I have not played with the kernel VM settings in recent kernels myself. You'll probably be better off asking about that in a new, specific question - you might get vmem experts chiming in who did not join in on this cache-related question. – David Spillett Jan 26 '10 at 11:09
  • `sync && /sbin/sysctl -w vm.drop_caches=3` – Asclepius Jan 21 '14 at 23:44