
I want to make a server for my static content.
I need to serve some 3-10 MB files - a lot of them. (I will also put some .js, .css and images from my websites on this server.)
I thought of nginx and G-WAN ( http://trustleap.com/ ).
What I don't know is what resources are needed for serving static content. How much RAM is used for each file transfer?
If I go with a 256 MB (or 512 MB) VPS with a good port and plenty of bandwidth, how many hits/second will I be able to serve (3-10 MB files)? (I know "it depends" - but please give me a rough estimate based on experience or theory.)
There are not a lot of files, they are just downloaded often - should I consider caching, or would that only use up the memory needed for serving hits?

– asked by cripox, edited by Gil

3 Answers


If you're using nginx, then you're talking just a few KB of overhead per active connection. If you're using something like Apache, you'll have one thread per connection, which means hundreds of KB or even megabytes per connection.

However, nginx does not support asynchronous disk IO on Linux (because async disk IO on Linux is basically horribly broken by design). So you will have to run many nginx worker processes, as every disk read could potentially block a whole worker process. If you're using FreeBSD, this isn't a problem, and nginx will work wonders with asynchronous disk and network IO. But you might want to stick with Apache if you're using Linux for this project.
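
For what it's worth, a minimal nginx configuration along these lines might look roughly like the sketch below; the worker count, paths and numbers are illustrative assumptions, not anything taken from this question:

    # Several worker processes, because any one of them can block on an uncached
    # disk read (the Linux async-IO limitation described above).
    worker_processes  8;

    events {
        worker_connections  1024;
    }

    http {
        sendfile           on;   # let the kernel copy file data straight to the socket
        tcp_nopush         on;
        keepalive_timeout  15;

        server {
            listen 80;
            root /var/www/static;   # hypothetical document root for the 3-10 MB files
            location / {
                expires 7d;         # let browsers and any CDN cache the big files
            }
        }
    }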

But really, the most important thing is disk cache rather than the web server you choose. You want lots of free RAM so that the OS will cache those files and make reads really fast. If the "hot set" is more than say 8 GB, consider getting less RAM and an inexpensive SSD instead, as the cost/benefit ratio will likely be better.
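
If you want to check that the hot files really are being served from RAM rather than from disk, a couple of illustrative commands (the path is an assumption, and vmtouch is a separate small tool you would have to install):

    free -m                             # the "buff/cache" column is the page cache holding your files
    vmtouch -v /var/www/static/*.bin    # per-file page-cache residency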

Finally, consider using a CDN to offload this, and getting a really cheap server. Serving static files is what they do, and they do it very fast and very cheaply. SimpleCDN has the lowest prices, but MaxCDN, Rackspace, Amazon, etc. all are big players at the low end of the CDN space.

– rmalayter
  • Thanks for all the info. So, if I chose FreeBSD or another UNIX, it should be OK to serve what I need (1-10 TB) from a VPS with less memory using nginx? What about Windows? – cripox Sep 21 '10 at 15:46
  • Makes me wonder what software CDNs use, though it could be nginx with FreeBSD... Anyway, it appears that if you use nginx + Linux and frequently exceed the RAM disk cache (a rare case?), there is an "offload I/O to threads" module as of 1.7.11: https://www.nginx.com/blog/thread-pools-boost-performance-9x/ I agree that just getting more RAM would help as well, to keep more of the disk cached in the page cache. – rogerdpack Mar 01 '16 at 18:11
  • @rogerdpack: Cloudflare and MaxCDN are both known to use nginx on their edge-facing machines. Both blog about it frequently. – rmalayter Mar 02 '16 at 05:18
  • Is this info still correct 6 years later? – Adam Baxter Nov 12 '16 at 06:15
  • Unfortunately yes, async file IO on Linux is still broken by design in 2016 (limited to uncached full block reads and writes). Useful for database servers but not much else. To my knowledge nginx has not implemented a user-space thread pool to emulate general-purpose async file IO on Linux. https://lwn.net/Articles/671649/ – rmalayter Nov 12 '16 at 16:09
  • To anyone from the future: nginx does have thread pools https://www.nginx.com/blog/thread-pools-boost-performance-9x/ – Matt Apr 07 '20 at 11:05

If the OS can cache the hot part of the content in RAM, it will not touch the disk and will serve things really quickly. Hundreds of requests per second should be possible on a VPS; you will most likely saturate the network well before you run into CPU limits.
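
To put rough numbers on "you will saturate the network first" (my own back-of-envelope arithmetic for the 3-10 MB files in the question, not a measurement):

    100 Mbit/s port ≈  12 MB/s  ->  roughly 1-4 sustained downloads/s of a 3-10 MB file
      1 Gbit/s port ≈ 125 MB/s  ->  roughly 12-40 sustained downloads/s of a 3-10 MB file

Each of those simultaneous transfers costs the server only a few kilobytes of RAM, so a 256 MB VPS will typically run out of port long before it runs out of memory.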

If the content does not fit into RAM, then disk IO (seek times, throughput, filesystem fragmentation) comes into play and the equation changes.

The web server adds some memory overhead per client, but nginx can keep that to a few kilobytes per connection.

Hope these pointers can help you.

– Joris
  • Thanks, but what is not clear to me is whether there is memory overhead per connection, i.e. whether nginx or gwan consumes memory for every hit. If I have 10 requests for a 5 MB file at the same time, will this mean 50 MB of memory is used for serving them? Maybe plus memory for threads (I don't know if nginx or gwan uses threads for every connection). – cripox Sep 17 '10 at 11:44
  • Per open connection they require some memory. 10 concurrent requests (at any time there are 10 TCP connections open sending/receiving the file) will require 10 times a few kilobytes. This has nothing to do with the contents, so the 5 MB does not apply here. – Joris Sep 17 '10 at 12:59

what resources are needed for serving static content? How much RAM is used for each file transfer?

First, for the same number of workers, G-WAN v4.7+ uses far less RAM than Nginx at startup:

> Server 'nginx' process topology:
---------------------------------------------
  6] pid:21228 Process RAM: 0.77 MB
  5] pid:21229 Process RAM: 2.44 MB
  4] pid:21230 Process RAM: 2.44 MB
  3] pid:21231 Process RAM: 2.44 MB
  2] pid:21232 Process RAM: 2.44 MB
  1] pid:21233 Process RAM: 2.44 MB
  0] pid:21234 Process RAM: 2.44 MB
---------------------------------------------
Total 'nginx' server footprint: 15.39 MB

> Server 'gwan' process topology:
---------------------------------------------
  6] pid:6054 Thread
  5] pid:6053 Thread
  4] pid:6052 Thread
  3] pid:6051 Thread
  2] pid:6050 Thread
  1] pid:6049 Thread
  0] pid:5839 Process RAM: 2.19 MB
---------------------------------------------
Total 'gwan' server footprint: 2.19 MB

G-WAN uses threads (one per core typically), Nginx uses processes (one per core typically), and processes drag more overhead, require synchronization via shared memory, etc. Both use the "asynchronous" model of event handling.

Note that here G-WAN can automatically grow to more than 1 million concurrent connections, while Nginx is limited by its worker_connections setting (defined at only 4096 in the ab.c test above).

what is not clear to me is if there is memory overhead per connection: i.e. if nginx or gwan consumes memory for every hit?

The short story is that G-WAN v4.7+ (where in-memory caching is disabled by default) consumes much less RAM than Nginx, for all file sizes, while serving more requests per second.

The long story is that while Nginx consumes more and more memory even with new HTTP keep-alive requests, G-WAN's memory usage can stay stable for HTTP keep-alive requests, and it grows far less than Nginx's with non-keep-alive requests.

Our weighttp wrapper ab.c measures the memory consumption of the server application and of the system for the duration of the test, and it shows that Nginx puts a heavier load on the system's memory resources.
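
For reference, reproducing that kind of measurement without ab.c boils down to driving the server with weighttp directly and watching its memory from another shell; the URL, request counts and server name below are placeholders:

    weighttp -n 100000 -c 300 -t 4 -k "http://127.0.0.1:8080/file.bin"
    ps -o rss,vsz,cmd -C nginx    # resident/virtual memory of the worker processes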

This is due to the way each web server is handling requests and allocating memory.

If I have 10 requests for a 5 MB file at the same time, will this mean 50 MB of memory is used for serving them? Maybe plus memory for threads (I don't know if nginx or gwan uses threads for every connection).

Both servers (Nginx and G-WAN) use sendfile() so the kernel (rather than the application) is allocating the resources for I/O.

The web servers will still allocate resources, but that's for maintaining the context of each connection rather than to buffer disk I/O.

Therefore, the memory consumption depends on the size of the file chunks sent at each sendfile() call rather than directly on the total file size.

The total file size has an influence in the long term at high concurrencies, but that's due to the number of chunks that need to be cached by the kernel.
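
To make the sendfile() point concrete, here is a minimal C sketch of that pattern (illustrative only, not the actual code of either server; the function name and the 256 KB chunk size are assumptions):

    /* The file's bytes move disk -> kernel page cache -> socket; the process
     * itself never buffers the 5 MB file, it only keeps a descriptor and an offset. */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Send one file over an already-connected socket, one chunk per call. */
    static int send_static_file(int client_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            /* The kernel copies up to 256 KB from the page cache to the socket. */
            ssize_t sent = sendfile(client_fd, fd, &offset, 256 * 1024);
            if (sent <= 0)
                break;   /* real code would handle EAGAIN/EINTR and retry */
        }

        close(fd);
        return offset == st.st_size ? 0 : -1;
    }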

Any more questions, drop us a line at G-WAN. We have invested heavily in CDN-like applications.

– Gil (answer edited by rogerdpack)