You will be fine even with 1GiB (and likely less) of swap. My work computer typically uses no more than 140-150 MiB of swap. A gigabyte is plenty of over-provisioning for that.
Unless you run compute tasks on datasets in the hundreds of gigabytes where (and this is important!) the data is accessed in a more or less access-once fashion, you will never want swap much larger than that. But then again, simply memory-mapping the data file works equally well for that kind of workload.
But more swap helps more, right? More of anything is always better!
Consider what difference a swap of, say, 16GiB makes (or think of 64GiB). If you never use those 16GiB, you might as well not have set them aside in the first place. But if you do use them, what happens? Disk, compared to main memory, is exceedingly slow. Even with a SATA-600 SSD, transferring 16GiB takes between 30 and 40 seconds, and 2-4 times as long on many other configurations.
Now someone will inevitably object that in practice you page a dozen or so 4kiB pages in and out, not 16GiB in one go. While that is true, the point nevertheless stands. If you only need to swap a couple of pages in and out, you don't need 16GiB of swap; but if you do need 16GiB of swap, then you are going to transfer them, too (one way or another).
In theory, 99.9% of all users could even use a 64GiB machine (or any 8+GiB machine) without any swap, and most likely never notice anything missing. However, this is not advisable.
First, it is sub-optimal because the operating system has fewer choices in what it can discard when it runs out of physical memory. There are two things it can do: swap out something that isn't being used, or throw away pages from the buffer cache. If you have no swap, there is only one thing it can do. Throwing away buffer-cache pages is harmless to correctness (the data can be re-read from disk), but it may noticeably impact performance.
Second, private anonymous mappings might simply fail if there is no swap. That usually won't happen, but eventually, when there is not enough physical memory to satisfy them all and no swap to back them, the operating system has no choice but to fail the allocation, except...
Third, the dreaded OOM killer may kick in, which means a more or less random process gets killed. No thank you. This is not something you want to have happen.
With that said, advice such as "you need swap equal to X times the amount of RAM installed" comes from people who repeat something they heard (and didn't understand!) from someone who repeated something they heard (and didn't understand!) decades ago.
The "use 2X your RAM" rule was an easy to remember rule of thumb in the 1980s and 1990s, it was never the "golden truth" (just something that worked OK for most users), and it doesn't apply at all nowadays.
You should have a reasonable amount of swap which you can easily afford (say, a gigabyte), so the OS can page out some stale stuff, and so the world doesn't immediately end when you occasionally ask for a little more memory. But that's it.
Comments:

Instead of relying on swap explicitly, if you know the size of your working set up front, or are willing to do a bit more low-level memory management, consider using mmap to allocate your working-set pages. Then your amount of swap will be exactly the amount you need for your process. – fluffy – 2014-07-08

The advice recommending "twice the amount of RAM" dates back to old times, when computers had little RAM. Several docs state that it is primarily applicable to computers with < 2GB RAM. Above that, swap size is mostly related to what you're doing with the machine. – John WH Smith – 2014-07-08

See also this Server Fault Q&A: if you're running Java (and possibly other apps), you want to make sure you have enough swap for them to increase their memory allocations. I personally stick with the RHEL standard of RAM+2 for my swap partition. – warren – 2014-07-08

It's a shame most of the comments here were removed. Adding back in: it's worth mentioning, incidentally, that if your kernel supports it, you may wish to mount your swap partition with discard on an SSD. Also (and this was mentioned in an answer below), don't forget you can use a file instead of a partition for potentially easier management (and no performance hits on an SSD due to e.g. fragmentation). – Jason C – 2014-07-08

SSDs newer than 2013 are quite reliable now. Putting a swap file on them doesn't seem to wear them out any faster than a regular spinning disk. I've got a 2-year-old drive with swap on it and it's still going well. And I have much less RAM. – Matt H – 2014-07-09

You could also set up zram. Give it a higher priority than the disk-based swap and you can put off swapping to disk a little bit longer. – fho – 2014-07-10

Are you going to hibernate (suspend-to-disk) your machine? – Reinstate Monica - M. Schröder – 2014-07-11

If you have a memory-intensive application, like SVM learning, and you run out of RAM and start swapping, everything becomes too sluggish to recover, and your only available move is pulling the plug (that happened to me a couple of times). You probably want your process to be OOM-killed if it starts swapping, so that you can at least change things and start again. Maybe with SSD drives it's not so bad, though. I would check the OOM-killer settings too: it happened to me on Ubuntu that processes sometimes got OOM-killed when there was still plenty of RAM left, because they allocated aggressively. – pqnet – 2014-08-06