
I'm doing some file processing that seems to require an enormous swap file: even 20 GB isn't enough. What's the theoretical maximum? Running swapon on a 1 TB file resulted in:

swapon: /mnt/big/swap.swap: swapon failed: Invalid argument

The system in question is an Ubuntu VM running on OpenStack, and the drive is NFS-mounted. Answers broader than this are fine too, though.

Steve Bennett
    If what you are doing needs THAT much swap, you're going about it the wrong way. Even if 50 GB were enough in theory, it would be SO slow that it would never finish, since disks are several orders of magnitude slower than RAM. – psusi Jun 14 '12 at 01:03

2 Answers


The error message here probably comes not from the size of the swap file per se, but from its location on an NFS mount. There is nothing wrong, I believe, with a 1 TB swap file; imagine what sort of swap there would be on a multiprocessor SMP machine with 4 TB of RAM!

In order to swap on a remote file, you can do the following:

  # losetup /dev/loop0 /mnt/big/swap.swap   # attach the swap file to a loop device
  # mkswap /dev/loop0                       # write a swap signature to the loop device
  # swapon /dev/loop0                       # enable swapping on the loop device
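
If the swap file does not already exist, it can be created first; a minimal sketch using dd (the path and size here are only illustrative):

  # dd if=/dev/zero of=/mnt/big/swap.swap bs=1M count=1048576   # 1 TiB of zeros
  # chmod 600 /mnt/big/swap.swap                                # keep the swap file private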
Dmitri Chubarov

Microsoft suggests that "it is four times the physical RAM in the computer, rounded to the next 4 megabytes (MB)." By that rule, for example, a machine with 8 GB of RAM would get a 32 GB page file.

But I have to agree with psusi: there must be a better way to process that file, for example by working through it in chunks. I was able to parse a very big XML file this way. When you only need access to one line at a time, streaming the file uses far less memory than loading the whole thing into memory and then parsing it.
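
As a rough sketch of that line-at-a-time approach in shell (process_line is a hypothetical command standing in for whatever the real per-record work is):

  while IFS= read -r line; do
      process_line "$line"   # hypothetical stand-in; memory use stays constant regardless of file size
  done < /mnt/big/input.xml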

NiteRain
    Probably, but I'm using someone else's script. Doing it this way was an efficient use of my time (i.e., create the swap file, run their script, go away for a few hours). It actually got to about 55% processed with a 50 GB swap file. And it only needs to be done once. – Steve Bennett Jun 20 '12 at 04:54