92

On our servers we have a habit of dropping caches at midnight.

sync; echo 3 > /proc/sys/vm/drop_caches
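In cron the entry looks roughly like this:

0 0 * * * sync; echo 3 > /proc/sys/vm/drop_caches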

When I run this it seems to free up lots of RAM, but do I really need to do that? Isn't free RAM a waste?

Vikelidis Kostas
  • 927
  • 1
  • 6
  • 15
ivcode
  • 1,062
  • 1
  • 9
  • 13
  • 64
    Find the person who put this in and ask him why he did it. As you correctly guessed, there is no obvious good reason for it. – Michael Hampton May 20 '14 at 03:15
  • 2
    That person is no longer employed, so I can't ask him. When I ask others, they say it is good practice to free up RAM, but I don't see the point. In what cases should I use the above code anyway? – ivcode May 20 '14 at 03:37
  • 12
    Debugging the kernel. That's about it. This doesn't actually free up any RAM; it drops caches, as the name suggests, and thus reduces performance. – Michael Hampton May 20 '14 at 03:38
  • 1
    BTW we have a server on VMware that doesn't have a lot of memory, and we have a cronjob monitoring its RAM with `vmstat 2 3|tail -1|awk '{print $4}'`; when the value drops by more than some amount it drops caches, otherwise the server will hang – ivcode May 20 '14 at 03:44
  • 29
    @ivcode Then you should find and fix the problem with that server rather than trying to avoid the conditions that cause it. If my car stalled every time I made a sharp right turn, avoiding sharp right turns is a lousy fix. – David Schwartz May 20 '14 at 05:01
  • Thank you so much @David for your clear explanations. This made me take the matter to the software developer rather than looking for quick fixes – ivcode May 20 '14 at 08:00
  • I can only guess that perhaps it was a measure to cut data losses, maybe because of frequent crashes/panics/power loss – EkriirkE May 20 '14 at 08:33
  • 7
    Related (strongly arguing it's a bad idea): http://thedailywtf.com/Articles/Modern-Memory-Management.aspx – Drunix May 20 '14 at 09:22
  • 2
    @EkriirkE to cut data losses only `sync` would be sufficient, dropping caches is a no-op for this purpose. – Ruslan May 20 '14 at 15:12
  • 7
    Related, and a useful description of the "problem": http://www.linuxatemyram.com/ – Bill Weiss May 20 '14 at 16:47
  • 3
    `sudo killall -r .*` also frees a lot of memory – Max May 20 '14 at 21:01
  • perhaps the person who put that in is Patrick R: http://serverfault.com/questions/105606/deleting-linux-cached-ram – Colin Pickard May 21 '14 at 16:54
  • 1
    @Max `sudo killall -s KILL -r .*` ;) – Nathan C May 22 '14 at 15:44
  • 4
    It's probably "system guano". The person who put it there may not remember why it's there, or if it works, or why it works if it works. Maybe nobody knows why it's there. It remains because "if it works, don't break it". In systems with poor configuration control this crap accumulates. The long-term answer is to improve configuration/change/revision management for your systems. A configuration management system like CFEngine, Chef or Puppet won't stop you from doing some stupid things, but you'll have to be _consistently_ stupid, which (we hope) is more likely to be caught and dealt with. – Scott Leadley May 23 '14 at 15:26

15 Answers

95

You are 100% correct. It is not a good practice to free up RAM. This is likely an example of cargo cult system administration.
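A quick way to convince yourself that memory "used" by cache is not wasted (a sketch; the `available` column needs a reasonably recent procps-ng `free`, and MemAvailable needs kernel 3.14 or later):

free -h                            # "buff/cache" is reclaimable; "available" is what applications can actually get
grep MemAvailable /proc/meminfo    # the kernel's own estimate of memory available without swapping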

David Schwartz
  • 31,215
  • 2
  • 53
  • 82
  • 14
    +1 for mentioning Cargo Cult System Administration. Any sysadmin who doesn't know that term and what it means should be fired. – Tonny May 20 '14 at 10:22
  • 11
    @Tonny: We would be left without sysadmin department then :( – PlasmaHH May 20 '14 at 19:44
  • @PlasmaHH Unfortunately that is the situation most of us find ourselves in... The one thing worse than a sysadmin who doesn't know about CargoCult is an ICT manager who doesn't know AND runs his department like CargoCult. I worked for such a one once upon a time. I left that place after 3 months and I swore NEVER to do that again. (They went bankrupt 7 months later... Partly due to ICT mismanagement breaking their Sales system beyond repair.) – Tonny May 20 '14 at 19:56
  • 2
    Like most of humanity, I love terse brash assertions with lots of approval, but a cite or reasoning would earn my superego's +1. – Aaron Hall May 23 '14 at 20:22
  • 1
    @AaronHall For RAM to provide any benefit, it must be used. It's the RAM your system is *using* that is improving its performance. – David Schwartz May 25 '14 at 20:31
  • 2
    Explain the cargo-cult administration, as well as the above, if you don't mind. Maybe in a follow-on edit? I'm still withholding my +1... :P – Aaron Hall May 26 '14 at 02:56
  • 1
    @Tonny "I left that place after 3 months and I swore NEVER to do that again" sounds like cargo-cult employer selection to me :) – qris May 27 '14 at 08:39
  • @qris Yes, maybe. In my defense: It was my first job out of university and I didn't know any better. I was just glad to have a job and initially it looked good. It was a very frustrating experience at the time, but in hindsight it was highly educational. – Tonny May 27 '14 at 19:10
  • 1
    SUSE recommends this method to try to deal with memory pressure. https://www.suse.com/communities/blog/sles-1112-os-tuning-optimisation-guide-part-1/ – Dan Pritts Apr 11 '16 at 20:02
  • @DanPritts That's pretty depressing. Even more depressing, they don't explain what the "issue" is that they claim it deals with. – David Schwartz Apr 11 '16 at 21:31
  • 2
    "its possible that though your application may not be using these RAM but Linux is caching aggressively into its memory and even though the application needs memory it wont free some of these cache but would rather start swapping." Not very specific. In practice, memory management isn't perfect, and having a knob to turn when that imperfection shows up is a good thing. – Dan Pritts Apr 12 '16 at 18:07
  • @DanPritts That's not an imperfection. That's a huge win. That way, if you do run into memory pressure, you don't have to write out pages then, you can just discard them. – David Schwartz Apr 12 '16 at 20:37
  • 2
    Caching aggressively is a win. Caching so aggressively that your application starts to swap...not so much. – Dan Pritts Apr 13 '16 at 01:16
  • The world is more dirty now, how does all of this sit in the hyper converged world where you have ballooning vms sat ontop of heavily cached file systems and block storage? – krad Feb 25 '20 at 09:46
  • Try using Ubuntu and see how nice it is for your RAM cache to be at 20GB and to have an application hang because there is not enough free RAM. – Alex G Jul 07 '21 at 16:11
  • @DanPritts That's incorrect. The earlier you start swapping the better because earlier on, the extra I/O has no effect on performance because you aren't I/O limited. By the time you actually need to swap, you *are* I/O limited. So it's a huge win to have written stuff out already so you can discard it from RAM without having to write it out when I/O is precious. – David Schwartz Apr 04 '22 at 21:21
  • Interesting point - having it written to swap but still "cached swap" is certainly reasonable. That isn't what I meant, though. An imbalance between application memory and disk cache is a bad thing, as you surely understand. – Dan Pritts Apr 05 '22 at 20:45
64

Yes, clearing the cache will free RAM, but it causes the kernel to look for files on disk rather than in the cache, which can cause performance issues.

Normally the kernel will clear the cache when available RAM is depleted. It frequently writes dirty pages to disk using pdflush.
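The knobs controlling that writeback behaviour are the standard vm sysctls (a sketch, nothing specific to this setup):

sysctl vm.dirty_background_ratio vm.dirty_ratio   # thresholds at which background and forced writeback kick in
grep -E '^(Dirty|Writeback):' /proc/meminfo       # how much dirty data is currently waiting to be written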

devicenull
  • 5,572
  • 1
  • 25
  • 31
ananthan
  • 1,490
  • 1
  • 17
  • 27
  • 22
    +1 for explaining *why* it's a bad idea. – Ogre Psalm33 May 20 '14 at 17:00
  • @ananthan - A post on `rsync` suggests dropping caches - https://unix.stackexchange.com/a/510800 – Motivated Jan 03 '20 at 19:28
  • @Motivated And it makes some sense if you do not fully trust your memory (i.e. non-ECC RAM may have a flipped bit in the cached segments), not to speed things up but to minimize the chance that memory errors change your rsync results. On a server with ECC memory, the chances of that happening are so astronomically low that you should not bother. – P.Péter Apr 22 '20 at 14:51
36

The reason to drop caches like this is to benchmark disk performance, and that is the only reason the feature exists.

When running an I/O-intensive benchmark, you want to be sure that the various settings you try are all actually doing disk I/O, so Linux allows you to drop caches rather than do a full reboot.

To quote from the documentation:

This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc...) These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were under heavy use. Because of this, use outside of a testing or debugging environment is not recommended.
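A minimal sketch of the benchmarking pattern this is meant for (the test file path and block size are placeholders):

sync; echo 3 > /proc/sys/vm/drop_caches            # as root: start from a cold cache
time dd if=/path/to/testfile of=/dev/null bs=1M    # cold-cache read, measures the disk
time dd if=/path/to/testfile of=/dev/null bs=1M    # warm-cache read, measures the page cache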

Cristian Ciupitu
  • 6,226
  • 2
  • 41
  • 55
Joe
  • 461
  • 3
  • 4
  • Of course, depending on what you are trying to do, even a full reboot might not sufficiently clear the disk cache. – user May 21 '14 at 15:01
  • 1
    "these objects are automatically reclaimed by the kernel when memory is needed" is the design goal but it might not always be the actual behavior. – Dan Pritts Jan 14 '15 at 15:51
  • @DanPritts What precisely makes you think it's not so? – Joe Jan 28 '15 at 03:24
  • 3
    The obvious case is when you want to clear out RAM to allow the allocation of more (non-transparent) hugepages; another case is transparent hugepage garbage collection pause bugs (see my answer/comments elsewhere on this question). But my comment was intended for the general case. Sometimes the people who are operating the system know better than the people who designed/implemented it. Often, not - that's what their comment is trying to protect against. I'm just glad that the – Dan Pritts Jan 28 '15 at 15:24
29

The basic idea here is probably not that bad (just very naive and misleading): there may be cached files that are very unlikely to be accessed in the near future, for example logfiles. These "eat up" RAM that the OS will later have to free, one way or another, when it is needed.

Depending on your swappiness setting, file access patterns, memory allocation patterns and many more unpredictable things, it may happen that when you don't free these caches, the kernel will later be forced to reclaim them, which takes a little more time than allocating memory from the pool of unused memory. In the worst case the swappiness setting of Linux will cause program memory to be swapped out, because Linux thinks those files are more likely to be used in the near future than the program memory.

In my environment, Linux quite often guesses wrong, and at the opening of most European stock exchanges (around 09:00 local time) servers start doing things that they do only once per day and need to swap in memory that was previously swapped out, because writing logfiles, compressing them, copying them etc. filled up the cache to the point where things had to be swapped out.

But is dropping caches the solution to this problem? Definitely not. The solution here would be to tell Linux what it doesn't know: that these files will likely not be used anymore. This can be done by the writing application using things like posix_fadvise(), or with a command-line tool like vmtouch (which can also be used to inspect what is cached as well as to cache files).

That way you can remove the data that is no longer needed from the caches and keep the stuff that should be cached, because when you drop all caches, a lot of data has to be reread from disk. And that at the worst possible moment: when it is needed, causing delays in your application that are noticeable and often unacceptable.

What you should have in place is a system that monitors your memory usage patterns (e.g. whether something is swapping), analyzes them, and acts accordingly. The solution might be to evict some big files at the end of the day using vmtouch; it might also be to add more RAM, because the daily peak usage of the server simply needs it.
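For example, with vmtouch (the logfile path is just a placeholder):

vmtouch /var/log/myapp/big.log       # show how much of this file is currently in the page cache
vmtouch -e /var/log/myapp/big.log    # evict only this file from the cache, leaving everything else alone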

PlasmaHH
  • 391
  • 2
  • 6
  • All the apps on my server are running under nohup. Maybe nohup.out is being cached and eating up memory? – ivcode May 21 '14 at 08:27
  • @ivcode: This could be a reason, check how big nohup.out is. Maybe use vmtouch to figure out how much of it is cached. – PlasmaHH May 21 '14 at 08:32
  • I have a cron job to `cat /dev/null > path/nohup.out` every 15 minutes, as nohup.out grows rapidly. Maybe Linux is caching nohup.out even though I'm clearing it – ivcode May 21 '14 at 08:41
  • 5
    @ivcode If you don't need the output from `nohup` you should re-direct it to `/dev/null`. It sounds like you had some very inexperienced sysadmins working on your systems at some point. See http://stackoverflow.com/questions/10408816/how-to-use-unix-command-nohup-without-nohup-out for how to direct `nohup`'s output to `/dev/null` – David Wilkins May 21 '14 at 13:28
  • Although nohup.out is cleared at 15-minute intervals, if the app's process gets killed for some reason, nohup.out is automatically backed up by another script. I tried vmtouch; it's a very good tool indeed – ivcode May 21 '14 at 15:02
  • +1 for a more in-depth explanation. – Dan Pritts Jan 14 '15 at 15:53
20

I have seen dropping caches be useful when starting up a bunch of virtual machines, or anything else that uses huge pages, such as some database servers.

Allocating huge pages in Linux often requires defragmenting RAM in order to find 2MB of contiguous physical memory to put into a page. Freeing all of the file cache makes this process much easier.

But I agree with most of the other answers in that there is not a generally good reason to drop the file cache every night.
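A sketch of the sequence this describes (the page count is an arbitrary example; compact_memory requires compaction support in the kernel):

sync; echo 3 > /proc/sys/vm/drop_caches    # free the file cache so contiguous 2MB regions are easier to find
echo 1 > /proc/sys/vm/compact_memory       # optionally ask the kernel to compact memory explicitly
sysctl -w vm.nr_hugepages=512              # try to reserve 512 huge pages (1GB with 2MB pages)
grep HugePages_Total /proc/meminfo         # check how many were actually allocated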

Zan Lynx
  • 886
  • 5
  • 13
  • 1
    I upvoted for pointing out the second-order prejudice in responses to dropping caches. – Noah Spurrier May 23 '14 at 08:44
  • 1
    Also, in HPC applications on high-memory nodes (1TB), reading in a few large files results in a large amount of memory being cached. Because many HPC applications perform mallocs of hundreds of GB, the system can stall for hours as migration processes move tiny chunks of fragmented memory fruitlessly across NUMA nodes once the system reaches the cached memory "border". Worse, there is nothing you can do in userland to free the caches except trick the system into allocating all the tiny 2MB blocks it can at once and then releasing them, letting hugepaged defrag and the apps run normally. – user1649948 Mar 18 '17 at 02:16
  • +1 The command to create large pages (`sysctl -w vm.nr_hugepages=...`) refuses to even work unless I first drop caches (Arch linux). – Aleksandr Dubinsky May 24 '17 at 19:13
8

It is possible that this was instituted as a way to stabilize the system when there was no one with the skills or experience to actually find the problem.

Freeing resources

Dropping caches will essentially free up some resources, but this has the side effect of making the system work harder to do what it is trying to do. If the system is swapping (trying to read and write to a disk swap partition faster than it is actually capable of) then dropping caches periodically can ease the symptom, but does nothing to cure the cause.

What is eating up memory?

You should determine what is causing a lot of memory consumption that makes dropping caches seem to work. This can be caused by any number of poorly configured or just plain wrongly utilized server processes. For instance, on one server I witnessed memory utilization max out when a Magento website reached a certain number of visitors within a 15 minute interval. This ended up being caused by Apache being configured to allow too many processes to run simultaneously. Too many processes, using a lot of memory (Magento is a beast sometimes) = swapping.

Bottom Line

Don't just assume that it is something that is necessary. Be proactive in finding out why it is there, have the guts to disable it if others suggest it is wrong, and observe the system - learn what the real problem is and fix it.
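Some starting points for that investigation (a sketch; nothing here is specific to Apache or Magento):

vmstat 5 5                       # watch the si/so columns for ongoing swap activity
ps aux --sort=-rss | head -n 15  # which processes hold the most resident memory
free -h                          # how much is real usage versus reclaimable cache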

4

Linux/m68k actually has a kernel bug which causes kswapd to go crazy and eat up 100% CPU (50% if there’s some other CPU-bound task, like a Debian binary package autobuilder – vulgo buildd – running already), which can (most of the time; not always) be mitigated by running this particular command every few hours.

That being said… your server is most likely not an m68k (Atari, Amiga, Classic Macintosh, VME, Q40/Q60, Sun3) system ;-)

In this case, the person who put in the lines either followed some questionable or, at best, outdated advice, or misunderstood how RAM should be used (modern thinking indeed says "free RAM is RAM wasted" and suggests caching), or "discovered" that this "fixes"[sic!] another problem elsewhere (and was too lazy to search for a proper fix).

mirabilos
  • 679
  • 1
  • 7
  • 20
  • "a kernel bug which causes kswapd to go crazy" - Which bug is this? – Ben Aug 03 '15 at 19:06
  • @Ben see [this thread](http://thread.gmane.org/gmane.linux.debian.ports.68k/12193/focus=12199) (this message and a couple of followups, one of which includes a guess where it could come from) – mirabilos Aug 04 '15 at 10:54
  • 1
    I'm experiencing a similar issue ( although it's x86_64 ) and the only solution at this moment is to drop caches http://serverfault.com/questions/740790/kswap-using-100-of-cpu-even-with-100gb-of-ram-available – Fernando Dec 04 '15 at 15:08
  • 2
    @Fernando I have a “drop caches” cronjob on the m68k box as well ☹ – mirabilos Dec 04 '15 at 15:28
4

I can think of one plausible reason to do this in a nightly cron job.

On a large system, it may be useful to periodically drop caches so you can remove memory fragmentation.

The kernel transparent hugepage support does a periodic sweep of memory to coalesce small pages into hugepages. Under degenerate conditions this can result in system pauses of a minute or two (my experience with this was in RHEL6; hopefully it's improved). Dropping caches may let the hugepage sweeper have some room to work with.

You might argue that this is a good reason to disable transparent hugepages; OTOH you may believe that the overall performance improvement from transparent hugepages is worth having, and worth paying the price of losing your caches once a day.
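Checking and disabling transparent hugepages looks roughly like this (the sysfs path shown is the upstream one; RHEL 6 used a redhat_transparent_hugepage directory instead):

cat /sys/kernel/mm/transparent_hugepage/enabled           # shows e.g. "[always] madvise never"
echo never > /sys/kernel/mm/transparent_hugepage/enabled  # as root: disable THP until the next reboot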


I've thought of another reason you would want to do it, although not in a cron job. Right before a virtualization system migrates a VM to new hardware would be a very good time for this. Less memory content to copy to the new host. You'll eventually have to read from storage instead, of course, but I'd probably take that tradeoff.

I don't know if any of the virt software actually does this.

Dan Pritts
  • 3,181
  • 25
  • 27
  • 1
    Do you have any source for this? This sounds like something that should be fixed in the kernel if it's such an issue. – gparent Jan 14 '15 at 15:59
  • 3
    I have personal experience with the pauses with transparent hugepages. RHEL6, Dell R810, 4CPUs, 64GB RAM. Disabling transparent hugepages (there's a /proc file to do so) immediately fixed the pauses. I didn't try the cache drop technique at the time; instead I reconfigured our java apps to use non-transparent hugepages, and left transparent hugepages disabled. IIRC, we looked into the situation enough to realize that we weren't the only people affected, and that Red Hat knew about the issue. – Dan Pritts Jan 14 '15 at 16:07
  • Hello Dan, I see the same behaviour on my server. I work with a huge amount of data, and there is a drastic performance drop after 10+ runs of the same Python program (2-3x the first run's time). If I take a look, the memory cache size is huge, 100+GB. And if I flush this memory cache and re-run my program, I get back my initial computation time. Do you have any documents or info to share about this phenomenon? Thank you. – Axel Borja Nov 24 '16 at 16:41
  • 1
    https://access.redhat.com/solutions/46111 describes it. You can disable transparent hugepages to see if that is the problem in your case. – Dan Pritts Nov 24 '16 at 18:49
3

One reason might be that the site is running some kind of monitoring that checks the amount of free RAM and sends a warning to administrators when free RAM drops below a certain percentage. If that monitoring tool is dumb enough not to include cache in the free RAM calculation, it might send false warnings; regularly emptying the cache could suppress these warnings while still allowing the tool to notice when "real" RAM gets low.

Of course, in this kind of situation, the real solution is to modify the monitoring tool to include cache in the free RAM calculation; clearing the cache is just a workaround, and a bad one at that, because the cache will refill quickly when processes access the disk.

So even if my assumption is true, the cache clearing is not something that makes sense; it's rather a workaround by someone who isn't competent enough to fix the primary problem.
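For what it's worth, a cache-aware check is a one-liner (MemAvailable needs kernel 3.14 or later; on older kernels adding "free" and "cached" together is a rough substitute):

awk '/^MemAvailable:/ {print $2}' /proc/meminfo    # memory (in kB) available without swapping, cache included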

Guntram Blohm
  • 469
  • 2
  • 6
2

This is old, and the most accepted answers explain why not to do it, but there is one place where I've seen this as a requirement inside the guest: a hosting provider (name not mentioned) that offers very cheap VMs. They appear to be heavily oversubscribed and use this to keep their systems "usable", i.e. you don't have very fast I/O all the time, and they clear the caches rather than doing things like running a ballooning driver. The reasoning is that most people don't use more than a fraction of the RAM allocated, so by clearing the caches (frequently) they keep the actual RAM usage in the guests, and thus on the host, as low as possible: more VMs per physical machine, more income.

Hvisage
  • 356
  • 2
  • 6
2

Just to add my two cents: the system knows very well that these memory pages are caches, and will drop as many as needed when an application asks for memory.

A relevant setting is /proc/sys/vm/swappiness, which tells the kernel whether, during new memory allocations, to prefer dropping memory caches or swapping out "idle" allocated memory pages.
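For example (a sketch; the value 10 is just an illustration of preferring cache reclaim over swapping):

cat /proc/sys/vm/swappiness    # the default is usually 60
sysctl -w vm.swappiness=10     # prefer reclaiming page cache over swapping out process pages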

Gaurav Parashar
  • 113
  • 1
  • 7
aularon
  • 156
  • 2
2

The question is from 2014, but since the problem exists to this day on some hidden CentOS 6.8 backends, it may still be useful for someone.

https://github.com/zfsonlinux/zfs/issues/1548 describes an issue with ZFS: disk space isn't freed for deleted files because, if NFS is used on top of ZFS, the files' inodes aren't dropped from the kernel's inode cache.

To quote from the bug thread, behlendorf wrote on Jan 6 2015:

The current speculation is that for some reason the NFS server is keeping a cached version of the file handle. Until the NFS server drops this file handle ZFS can't unlink this file. Some light testing has shown that dropping caches on the server will cause this reference to be dropped (like the NFS file handle) at which point the space is correctly freed. Memory pressure can also cause it to be dropped.

I.e. a nightly echo 3 > /proc/sys/vm/drop_caches is the easiest fix for that bug if you don't want downtime to restructure your ZFS.

So maybe it was not cargo cult admining, but some pretty good debugging that was the reason.

Iridos
  • 21
  • 1
0

This may make sense on NUMA (non-uniform memory access) systems used with parallel HPC applications, where, typically, each CPU (socket) can access all the memory transparently, but its own memory can be accessed faster than another socket's memory.

Many simple parallel applications tend to do file I/O from a single process, thus leaving on exit a big fraction of the memory on a single NUMA node allocated to disk cache, while on the other NUMA node the memory may be mostly free. In these situations, since the cache reclaim process in the Linux kernel, as far as I know, is still not NUMA-aware, processes running on the NUMA node that has memory allocated to cache are forced to allocate memory on the other NUMA node, as long as there is free RAM there, thus killing performance.
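To see the kind of per-node imbalance described above, numastat from the numactl package helps (a sketch):

numastat -m | grep -E 'MemFree|FilePages'   # per-NUMA-node free memory versus page cache
numactl --hardware                          # node sizes and how much is free on each node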

However, on an HPC system it would be wiser to clear the cache before starting a new user job, not at a specific time with cron.

For non-parallel applications this problem is unlikely to arise.

Davide
  • 51
  • 1
  • 4
0

When your page cache is quite large (a lot larger than your current swap usage) and swap-in and swap-out happen in turns, that is when you need to drop caches. I have seen cases where memory usage increases on one of my MariaDB database servers running Ubuntu 16.04 LTS, and Linux just chose to increase swap usage instead of removing unused page cache. Transparent hugepages were already disabled on my system because TokuDB required them to be disabled. Maybe it is not a bug, but Linux still behaving this way is quite puzzling to me. Various sources state that Linux will remove the page cache when an application requests memory.

But the reality is not that simple. The workaround is one of the following:

  1. Drop caches periodically
  2. Drop caches when needed (monitor with vmstat 1 for swap-out activity; see the sketch after the examples below)
  3. Advise Linux to remove certain files from the cache (such as Apache log files) using a utility like dd or python-fadvise. See https://unix.stackexchange.com/questions/36907/drop-a-specific-file-from-the-linux-filesystem-cache

Example dd run :

dd if=/var/log/apache2/access_log.1 iflag=nocache count=0

Example python-fadvise :

pyadvise -d /var/log/apache2/access_log.1
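And a sketch of option 2, the monitor-and-drop approach (the threshold, and whether dropping is a good idea at all, are site-specific judgment calls):

#!/bin/sh
# Drop caches only if vmstat reported swap-out activity (the "so" column) in the last few samples.
# Run as root, e.g. from cron.
swapped_out=$(vmstat 1 5 | tail -n +3 | awk '{sum += $8} END {print sum}')
if [ "$swapped_out" -gt 0 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi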

-5

I have a desktop machine with 16GB of RAM running on a PAE kernel. After an hour or two the disk performance degrades dramatically until I drop the caches, so I simply put it into cron. I don't know if this is a problem with the PAE kernel or with the cache implementation being so slow when there is plenty of memory.

kyku
  • 97
  • 2
  • 7
  • 9
    This is a prime example of the "cargo cult" system administration: rather than locating and solving the problem, you are simply masking it. – Michael Hampton May 23 '14 at 13:03
  • 2
    Sometimes the expedient solution is the right one. It might just be putting off resolving the real problem, or it might be as much solution as is required in the circumstances. Even if it's bad practice, it's still not "cargo cult." There's a demonstrated cause and effect: drop caches and disk performance improves. – Dan Pritts Jan 14 '15 at 15:49
  • 1
    Part of the original definition of CCSA was a tendency to mistake correlation for causation, and here we are. Masking a problem by addressing a correlated but not causal entity is suboptimal problem-solving, which is what the concept of CCSA is trying to warn against. – underscore_d Oct 06 '15 at 01:04