
I'm still seeing system administrators using the old rule of thumb that swap should be double the memory, even in servers with 32GB of memory.

These systems have relatively expensive disks (shipping with 200GB drives) and allocating 64GB of that to swap seems a bit excessive.

I was wondering: how much swap do you allocate for your Solaris servers, and why?

I saw there were other similar questions, but they mostly focused on Linux. For Solaris there is an additional consideration when deciding on your swap space, because the /tmp filesystem is usually backed by swap.

Andre Miller

5 Answers

7

The swap = 2 × RAM rule comes from the old days, when the kernel dumped memory to the swap device during a crash so that you could examine what had happened after rebooting the system. Nowadays Linux, for example, skips this entirely, and I don't run my systems with a crash dump configured. So doubling the memory size to get the swap size is no longer valid: it is not unusual to have 16/32/64GB of RAM these days, and you would obviously be wasting disk space by following a rule from an era when disks were much, much bigger than RAM.

Long story short: if you don't want to dump and analyze kernel crashes in production, there is no logical reason to keep this principle. Just give your system a couple of GB of swap; I usually give 2-4GB, because I want to avoid the huge I/O load caused by heavy swapping.
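
For reference, a minimal sketch of how you would check both things on a Solaris box (stock `swap` and `dumpadm` commands; exact output formats vary by release):

```sh
# Inspect the configured swap devices and the virtual swap accounting.
swap -l    # one line per swap device, sizes in 512-byte blocks
swap -s    # summary: allocated / reserved / used / available

# See whether (and where) kernel crash dumps are configured.
dumpadm    # shows the dump device and the savecore directory

# If you do want dumps but no dedicated dump device, you can point
# them at swap instead:
# dumpadm -d swap
```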

Istvan
3

It is no longer necessary to have swap in Solaris. If you know that your system will run completely in memory, then you can set it to zero. Unless disk space is really a problem, set it to the largest size you can get away with, as you will need some in a critical situation.

Solaris FAQ myths and facts

I normally set the value to be the same as RAM unless the system is likely to need a lot more.
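
For illustration, here is roughly how that value is changed on a ZFS-root system, where swap lives on a zvol (assuming the default `rpool/swap` layout; older UFS installs use a raw disk slice instead):

```sh
# Remove the current swap device so its backing volume can be resized.
swap -d /dev/zvol/dsk/rpool/swap

# Resize the zvol, e.g. to match the amount of RAM.
zfs set volsize=16G rpool/swap

# Add it back and verify.
swap -a /dev/zvol/dsk/rpool/swap
swap -l
```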

David Allan Finch
  • While I disagree with blanket statements like this, I would agree that it MAY NOT be necessary to have swap in Solaris. I found that some servers may well run fine within memory, and adding swap only increases the amount of time it takes a runaway app to crash while trying to allocate the Universe. So a better answer is: if your apps do not require more memory, you may not necessarily need swap in Solaris. +1 to bring you back from negative. – kmarsh Oct 22 '09 at 13:12
  • Thanks. I remember being on a Solaris Sys Admin course and being told that it was not required any more. Would I set up a system without swap? No. I would use 2 × RAM, as I am an old timer from SunOS 3 days ;) – David Allan Finch Oct 22 '09 at 14:15
  • "trying to allocate the Universe." That's going to be my new log entry for when various research applications go into Sorceror's Apprentice mode. – Bill B Oct 22 '09 at 15:08
3

It is recommended to have enough RAM for all of your applications' actively used memory to fit, with enough room left over for the various kernel-managed caches and other dynamic buffers to keep performance optimal. Otherwise you'll have too much paging and the system will underperform.

On the other hand, it is mandatory to have a swap area large enough for all memory reservations to be honored. Otherwise, your applications will randomly crash. This is not related to RAM usage: be aware that Solaris, unlike Linux and others, doesn't over-commit memory.

It is bad practice not to allocate a swap area at all with Solaris, as part of your RAM will simply be wasted. It is common to have 50% of virtual memory reserved but unused, so as a rule of thumb I would suggest a swap area sized between 50% and 100% of the RAM. There are specific workloads where paging a lot makes sense, and those justify larger swap spaces too.
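
To make the reservation point concrete, here is a sketch of how you would watch it with the stock `swap` command (the field meanings come from Solaris's virtual swap accounting):

```sh
# swap -s reports virtual swap (RAM + disk swap), not just the disk part:
#   allocated = reservations that have been touched (backing store assigned)
#   reserved  = reservations made but not yet touched
#   available = room left for new reservations
swap -s

# If "available" approaches zero, new malloc()/fork() calls start
# failing even when plenty of RAM is free and nothing is paging:
# that is the reservation limit being hit, not a RAM shortage.
```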

jlliagre
  • I am dealing with an issue where I have a process that grows to about 40GB. I run it on a Solaris machine with 128GB of RAM and 16GB of swap configured. The process fails without any errors. Could this be because of insufficient swap? Where did you get that mandatory swap requirement from? – James Dean Oct 21 '10 at 20:24
  • You do not provide enough details; a process can fail for plenty of reasons. What is your virtual memory usage when the process fails? I didn't write that having swap is mandatory, just that it has to be large enough for all memory reservations to fit. No swap at all might be large enough if you have "too much" RAM. – jlliagre Oct 21 '10 at 20:54
  • top shows almost no free memory and about 13GB of the swap still free when the process starts allocating most of its memory. I am not sure if the memory is really not free or just all used for disk caching. (This is actually on-site at a customer; I just get some top reports.) – James Dean Oct 22 '10 at 15:01
  • Memory (RAM) used for disk caching can be reported as used or free depending on the filesystem. Also, it is better to stick with standard Solaris tools like vmstat and prstat instead of top, which might give inconsistent values on Solaris. – jlliagre Oct 22 '10 at 21:49
2

I would disagree with the argument that swap is no longer required. If you are running heavy applications (e.g. EDA, space, petroleum, or weather-forecasting workloads), they usually run out of physical memory. So it depends on the kind of applications you're running; in short, there is unfortunately no single rule that fits everyone. You will have to decide on the swap size based on your application requirements.

LOhit
  • My main Solaris development system has 16GB of RAM, 16GB of swap, and I wish I had more of both -- but then I'm dealing with problem sets that are best categorized as 'freakishly huge'. Swap space on Solaris should be configured based on expected usage of the machine. – Bill B Oct 22 '09 at 12:50
1

This depends a lot on your applications.

Because Solaris doesn't seem to be able to overcommit memory, you may have to add tons of swap even if it is never physically used.

Benoît
  • *Because Solaris doesn't seem to be able to overcommit memory* A bit old, but that's stated as if there's something wrong with never overcommitting memory. There's nothing wrong with never overcommitting memory. What's **WRONG** is to have some "OOM killer" wake up because some user process used too much memory, and wind up with that OOM killer killing the database process - on a critical server that has the sole purpose of running that very database. If you *really* care about uptime, availability, and reliability, you *disable* memory overcommit on any critical system. – Andrew Henle Sep 17 '17 at 13:28
  • What I meant is that because of some JVM processes, I used to allocate up to 8x the RAM size as swap (like 128GB). Definitely cheaper than buying DIMMs for nothing. – Benoît Sep 19 '17 at 15:18
  • For JVMs, I'd argue the Linux approach of overcommitting memory borders on the insane, especially for Java processes that are intended to be long-lived, because given enough time those JVMs *will* use the memory they ask for, and if memory is overcommitted, something important will be summarily killed. Allocating swap is better than getting your map-reduce processes killed just because a lot of new (and important) data rolls in. – Andrew Henle Sep 19 '17 at 15:31
  • Remember that I wrote "This depends a lot on your applications" and "you may". I was merely describing what I needed to do to keep my users happy. At the same time, I hope you realize that a lot of applications do very large mallocs without ever using all of the pages. I especially remember cancelling a $60k order of DIMMs over this. In the end, what I was monitoring was the actual pagefile usage and the page swap-in/out rate. That's the true indicator of saturation on Solaris, not the swap usage. – Benoît Sep 19 '17 at 19:49
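
A short sketch of the monitoring approach described in this last comment, using stock Solaris tools (the 5-second interval is arbitrary):

```sh
# Watch paging activity: the sr (page scan rate) and pi/po
# (page-in/page-out) columns indicate real memory pressure,
# not the raw swap usage.
vmstat 5

# Cross-check reservation-level usage against per-device usage.
swap -s
swap -l
```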