
We have been running nginx -> uWSGI, and we are now evaluating Varnish as a caching layer between nginx and uWSGI (similar to http://www.heroku.com/how/architecture).

But nginx only supports HTTP/1.0 on the backend side, so it has to open a new connection to Varnish for every request.

Many people recommend running nginx in front of Varnish, but wouldn't it make more sense to use something like Cherokee instead, since it supports HTTP/1.1 on the backend side and would eliminate the per-request connection overhead?
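
For reference, here is a rough sketch of the setup we are evaluating; the addresses and ports are placeholders, and Varnish's own backend pointing at uWSGI would be configured separately in its VCL. Because nginx's proxy module only speaks HTTP/1.0 to the upstream, each request opens a fresh TCP connection to Varnish:

    upstream varnish {
        server 127.0.0.1:6081;   # Varnish listen address (placeholder)
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # HTTP/1.0 to the upstream: no keep-alive between nginx and Varnish
            proxy_pass http://varnish;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }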

espeed
  • Why not use nginx caching? – rvs Apr 18 '11 at 08:44
  • Persistent connections would allow for more data throughput. My tests with nginx and memcached showed that without keep-alive nginx could fetch from memcached at a rate of 10k/sec, while with keep-alive it would do 12k/sec (a keep-alive upstream sketch follows these comments). Question is, do you *really* need this difference? Is this worth spending precious developer time on? – Martin Fjordvald Apr 18 '11 at 13:04
  • Probably not -- Roberto De Ioris, the creator of uWSGI, just posted on the mailing list that he is working on adding uwsgi protocol support to both varnishd and haproxy, so nginx may soon be able to speak to Varnish over the uwsgi protocol. As he says, "http parsing is 300% slower than uwsgi parsing." – espeed Apr 18 '11 at 14:12
  • nginx caching doesn't support stuff like ESI for dynamic pages. Also see http://www.varnish-cache.org/trac/wiki/ArchitectNotes – espeed Apr 18 '11 at 15:02
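
A rough sketch of the keep-alive setup referenced in the comment above, assuming the upstream keepalive module is available (it is bundled with nginx 1.1.4 and later); the ports and the fallback backend are placeholders:

    upstream memcached_backend {
        server 127.0.0.1:11211;   # memcached instance (placeholder)
        keepalive 32;             # keep up to 32 idle connections open per worker
    }

    server {
        listen 80;

        location / {
            set $memcached_key $uri;
            memcached_pass memcached_backend;
            default_type text/html;
            # fall back to the application on a cache miss or upstream error
            error_page 404 502 504 = @app;
        }

        location @app {
            proxy_pass http://127.0.0.1:8080;   # application backend (placeholder)
        }
    }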

1 Answer


We debated this as well when putting in our backend cache layer. We are also using nginx, but with Squid and a JVM that serves the content.

If you aren't using any functionality that is unique to nginx, you could switch; in our case we had already built a couple of nginx modules.

You should measure the actual overhead of that connection setup against the end-to-end request time. In our testing it was always under 2 ms, while even reading an asset from the in-memory cache took longer than that to respond (over 5 ms).
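
One way to get those numbers out of nginx itself is to log the upstream connect time next to the total request time. This is only a sketch: $upstream_connect_time requires nginx 1.9.1 or newer, and the paths and ports are placeholders.

    http {
        # Compare per-request connection-setup cost with end-to-end latency.
        log_format timing '$remote_addr "$request" '
                          'connect=$upstream_connect_time '
                          'upstream=$upstream_response_time '
                          'total=$request_time';

        server {
            listen 80;
            access_log /var/log/nginx/timing.log timing;

            location / {
                proxy_pass http://127.0.0.1:6081;   # cache layer (placeholder)
            }
        }
    }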

polynomial