
This is essentially the reverse of "Linux: how to explicitly unswap everything possible?".

I want to maximize the amount of available free memory before running a process I know will use system memory intensively, and I don't want it to pause for long periods while the OS slowly gets it into its head that everything else should be swapped out.

Also, I know a lot of programs have memory they only use on initialization and then never touch again.

How do I make this happen?

I have tried `sysctl vm.swappiness=100`, but that hardly swaps out anything.

Sven
Omnifarious
  • Interesting question. It sounds to me like you want to essentially use a Windows-style memory manager, but on Linux. I'm not sure that exists. – Jim B Sep 27 '17 at 16:13
  • There was a big yellow box under your last question explaining everything you needed to know about why the question was closed. Read it next time and stop throwing around baseless accusations about motivations to close it. – Sven Sep 27 '17 at 16:21
  • Also, next time *edit* your question and let it run through review (or use a flag); don't delete it and post a new one. – Sven Sep 27 '17 at 16:23
  • @Sven - That big yellow box explained nothing. I already knew everything in it. I just didn't know why you, personally, thought my question didn't belong here. I guessed that, because it included a code sample and a reference to running top, you assumed it belonged on a code-oriented site or on superuser, but I didn't know. I didn't see anything wrong with it. Also, in my experience, when questions get stuck in that state, they get ignored forever and might as well be deleted, especially when nobody bothered to post a comment as to why they were broken. – Omnifarious Sep 27 '17 at 16:27
  • On StackOverflow, I can close with a single click, and I never do that unless I've left a comment and tried to work with the person a little to improve their question. It's dismissive and shows no respect to the person who spent the time to write down a question and post it. – Omnifarious Sep 27 '17 at 16:29
  • @JimB - I have a program I use to force this situation by intentionally creating immense memory pressure. But it's tricky to run and requires too much decision-making to figure out when it's done as much as it can. It'd be nice to just tell the kernel "flush every dirty page to swap". – Omnifarious Sep 27 '17 at 16:31
  • @JimB - Also, it chews a lot of CPU, and so if I were running on a cloud instance that charged for CPU time, I'd be wasting money for something that shouldn't use much CPU time at all. – Omnifarious Sep 27 '17 at 16:52
  • This will have the reverse of the effect you desire. With everything swapped out, almost anything that process tries to do will require I/O to fault back in what was swapped out. And every other process that tries to run will saturate the I/O as well. – David Schwartz Sep 27 '17 at 19:03
  • @DavidSchwartz - _nod_ That makes sense. Though I will say that I run this program that ups the memory pressure and then kill it before I launch the program I want. Still, that means large pieces of shared, non-dirty text (like glibc) may be evicted by the program I run and then have to be paged back in by the memory-hungry process before it really gets going. – Omnifarious Sep 27 '17 at 20:33

2 Answers


The unused initialization code will be freed as soon as the memory is needed for other purposes: code pages are backed by the files they were read from, so they can simply be discarded and re-read later rather than written to swap.

The memory paging mechanisms on Linux are well designed and have been tested for years. It is rare that you would want to swap a whole process out of memory; doing so results in heavy paging activity every time the swapped-out process is scheduled for execution.

If you truly need the memory held by the other applications, you have too little memory. You can prevent the other programs from executing by sending them a STOP signal with the kill command. Be careful which programs you stop, or you could lock yourself out of the system.
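
As a minimal sketch (the PID here is purely illustrative):

kill -STOP 1234    # suspend the process; its untouched pages become easy eviction candidates
# ... run the memory-intensive job ...
kill -CONT 1234    # resume the suspended process afterwards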

If you are experiencing long pauses during startup of your process, consider using sar to determine where the bottleneck is. You can also use top to determine which processes are being paged or swapped heavily. Don't be surprised if your own process shows up as the problem.
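
For example, assuming the sysstat package that provides sar is installed, you could watch paging and swapping rates while the process starts:

sar -B 1    # per-second paging statistics (page-ins, page-outs, faults)
sar -W 1    # per-second swapping statistics (pages swapped in/out)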

I've run servers that were severely starved for memory. To perform startups, it was essential to limit the number of processes starting at any one time. Processes start almost instantaneously even when memory is far over-committed.

If you really want to force everything possible out of memory, you could write a program that allocates the desired amount of memory and continually writes to each page of it for a few loops. It will experience all the issues you want to avoid.
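
The stress utility, where available, implements essentially this loop so you don't have to write it yourself (the size and duration below are illustrative placeholders):

stress --vm 1 --vm-bytes 8G --timeout 60s    # one worker repeatedly allocates and dirties 8 GiB for 60 seconds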

BillThor
  • The program I wrote does exactly what you say. I write to one byte of each page. It chews a lot of CPU though, which would be a waste if I ran it on a cloud platform that charged for CPU. But otherwise you're right. And from the other answers and comments it's clear there's no clever trick with `/proc` to do what I want. – Omnifarious Sep 28 '17 at 21:46

You may be able to achieve what you're trying to accomplish by dropping the caches (pagecache, dentries and inodes).

echo 3 > /proc/sys/vm/drop_caches

will clear them all.
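
Note that only clean pages can be dropped; the kernel documentation recommends running sync first so that dirty pages are written back and become freeable:

sync; echo 3 > /proc/sys/vm/drop_caches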

More information can be found at: Linux Memory Management - Drop Caches

  • It does not swap out the memory allocated by existing processes. – Tero Kilkanen Sep 27 '17 at 20:30
  • Yes, reading that, it looks like it does everything that's bad about the program I run, and none of the good stuff. What I would like is to force every piece of memory that's either a. dirty or b. hasn't been accessed in the last 30 seconds and doesn't have any backing store allocated to be immediately flushed. sync will flush dirty things, but will not allocate backing store for process memory that hasn't been used recently. – Omnifarious Sep 27 '17 at 20:41
  • Percona Server uses the technique I listed, followed by allocating the RAM required (for the InnoDB buffer pool, anyway) at process start instead of allocating it when necessary. The goal there was to provide uniform NUMA assignments across multiple nodes (by freeing up the space first). If the system has heavy disk activity, dropping the caches will free up a ton of RAM. It doesn't answer the original question, but this feels more like an X-Y problem to me. –  Sep 27 '17 at 20:49
  • @yoonix, as in "OP wants to do X, but is mistaken and really wants Y"? I can believe that. :-) Mostly, when I run this thing, I notice that its pages get evicted because it's using a lot of memory. But the memory access pattern is all over the map, and eventually those pages are going to have to come back. I want it to evict _other_ processes' pages instead. – Omnifarious Sep 27 '17 at 21:56
  • Unless you have control over the other processes, what's to stop them from waking back up and pulling it all back out of swap stomping right over everything you just tried to accomplish? If you can prevent it from running, shut it down. If you NEED swap you're doing it wrong. –  Sep 27 '17 at 22:53
  • @yoonix - Well, I could put them in the freezer cgroup and freeze them. Then they definitely wouldn't need those pages. :-) Or, I could just hope that mostly they won't need them, which, when I do it using my memory pressure technique, turns out to be true. Once the big process establishes its (very large) RSS, the kernel mostly leaves it alone. And it takes a lot less time to get there if I evict everything else first. – Omnifarious Sep 28 '17 at 01:02