7

I have seen a somewhat dated tutorial that suggests serving HTML files from a ramdisk created like this:

mkfs -q /dev/ram1 102400

I also found another source that uses something like this:

mount -t tmpfs -o size=1024 none /mnt/rds

Are these two methods equally valid? I am using CentOS 6.3 with nginx, so in practice I want to serve the files in /usr/share/nginx/html from RAM.

And once the ramdisk is mounted, do I have to copy the files into it again whenever there is a genuine change in the original folder?
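
For reference, as far as I can tell the first method only creates the filesystem; the full sequence would look roughly like this (the size comes from the tutorial, and the mount point is just borrowed from the second example):

mkfs -q /dev/ram1 102400
mkdir -p /mnt/rds
mount /dev/ram1 /mnt/rds
cp -a /usr/share/nginx/html/. /mnt/rds/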

Ladadadada
StCee
  • As is evidenced by @michael-hampton's answer, this question is unclear. The title talks about serving html from ramdisk, but the actual questions asked are not about serving html files, but about the differences between two ramdisk implementations. It would be better if you were asking one single clear question. – kojiro Feb 22 '13 at 18:46

4 Answers

32

Why bother? Linux is just going to cache them in RAM anyway, the first time they get read from disk. And if they're read frequently enough they'll always be cached.
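
You can check that this is what happens. Reading the files once pulls them into the page cache, and free shows how much memory the cache is using (the path is the one from the question; *.html is just an example pattern):

# read the files once so the kernel pulls them into the page cache
cat /usr/share/nginx/html/*.html > /dev/null

# the "cached" column is the page cache; it grows as files get read
free -m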

Michael Hampton
  • Hmm, I guess technically this does answer the question in a roundabout way. Are you saying "yes, the two methods are equally [in]valid"? Despite all the upvotes, though, it really sidesteps the question. I shall abstain from voting in either direction. – kojiro Feb 22 '13 at 18:45
  • We're generally all about practical solutions here. Neither ramdisk "solution" is nearly as practical as just letting Linux handle it itself, like it's been doing for many years. – Michael Hampton Feb 22 '13 at 18:46
4

From your question (last paragraph), I assume you think that the ramdrive will have the same contents as the original file system below. That's not the case. You will have an empty directory and need to fill it first. I don't think this is what you want.
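
In other words, if you mount a tmpfs you first get an empty directory, and something along these lines (size and mount point are just examples) would be needed before nginx could serve anything from it:

mount -t tmpfs -o size=64m tmpfs /mnt/rds
cp -a /usr/share/nginx/html/. /mnt/rds/
# nginx's root would then have to point at /mnt/rds instead of the original directory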

Linux has a very good cache system. Every memory page that is not needed for application memory is used as cache. This means that even without a tmpfs (and skipping tmpfs is what I would recommend), your files will stay in memory until there is a real need to flush them from there.

If that really happens and your memory gets too full:

  • if you use tmpfs, your tmpfs contents will be moved to swap, which means they end up on disk anyway and are no longer any faster than a real file system.
  • if you don't use tmpfs, your cached copy will be flushed from memory, which takes almost no time. When it is accessed the next time, it will be read from disk and come back into the cache.

So I don't see any advantage to using tmpfs unless you generate those files dynamically and at very short intervals. Linux is normally much more efficient if you let it decide how to handle memory usage and swapping.
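
As a rough illustration of that last point (the directory is the one from the question; index.html is just an example file), the first access after a page has been evicted is a disk read, and every repeated access is served from memory without any ramdisk involved:

# first access after the page was evicted (or after boot): read from disk
time cat /usr/share/nginx/html/index.html > /dev/null

# repeated immediately afterwards: served from the page cache
time cat /usr/share/nginx/html/index.html > /dev/null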

Daniel Alder
3

The tmpfs method has less overhead. In the /dev/ram1 example, you have an entire filesystem with inodes, directories, etc., stored in a block device. With tmpfs it is essentially only the disk cache.

Yes, if you create a ramdisk and copy files into it, you need to copy those files again whenever there is a change.
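
One way to do that re-copy (the mount point is the one from the question's tmpfs example; rsync is just one option) would be:

rsync -a --delete /usr/share/nginx/html/ /mnt/rds/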

Zan Lynx
0

Without saying that using a RAM disk is a good idea: I remember reading an article (unfortunately I have no URL at hand) that benchmarked the two kinds of RAM disks and, surprisingly, showed relevant differences (I have forgotten which one was better).

Hauke Laging
  • I'd +1 this if you found the article, or at least could summarize it from memory. As it is, though, your answer doesn't really answer the question in any way, and would really be better as a comment. – Ilmari Karonen Feb 22 '13 at 19:08