
First, some background: we have an embedded device that uploads a lot of small events to a web server. Chunked encoding is used to post this information. Every event is sent as a separate chunk, so the web server (node.js) can react to the events immediately. All this is working like a charm.

Disabling the server and listening with netcat instead shows what the device sends:

sudo nc -l 8080
POST /embedded_endpoint/ HTTP/1.1
Host: url.com
User-Agent: spot/33-dirty
Transfer-Encoding: chunked
Accept: text/x-events
Content-Type: text/x-events
Cache-Control: no-cache

120
{"some","json message"}
232
{"other","json event"}
232
{"and a lot more","up to 150 messages per second!"}
0

Now I have installed nginx (version 1.6.0) on my web server. In the end I want it to handle SSL, and I also want it to speed up normal web traffic.

If I now enable nginx with this server config:

server {
  listen 8080;

  location / {
    proxy_http_version 1.1;
    expires off;
    proxy_buffering off;
    chunked_transfer_encoding on;
    proxy_pass http://localhost:5000;
  }
}

Then I receive this:

sudo nc -l 5000
POST /embedded_endpoint/ HTTP/1.1
Host: localhost:5000
Connection: close
Content-Length: 2415
User-Agent: spot/33-dirty
Accept: text/x-events
Content-Type: text/x-events
Cache-Control: no-cache

{"some","json message"}
{"other","json event"}
{"and a lot more","up to 150 messages per second!"}

The problem is that this is buffered: the requests are now sent every 2 seconds. All messages are included and can be handled, but there is a delay now...

Can I instruct nginx to just forward my chunks directly? This only has to happen for this embedded endpoint; I understand that there is not much use in nginx if you disable this for all endpoints.

Sunib

3 Answers


If you can upgrade to nginx 1.8.x or 1.9.x, you can use this directive to disable request buffering:

proxy_request_buffering off;

I think this should solve your problem.
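
As a rough sketch of how this could be scoped to just the embedded endpoint (the location path and backend port are taken from the question; the rest is an assumption, not a tested configuration):

server {
  listen 8080;

  # Only the embedded endpoint forwards the request body unbuffered,
  # so the chunks reach the node.js backend as they arrive.
  location /embedded_endpoint/ {
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off;
    proxy_pass http://localhost:5000;
  }

  # All other traffic keeps the default buffering behaviour.
  location / {
    proxy_pass http://localhost:5000;
  }
}

That way normal web traffic still benefits from nginx's buffering, which was the concern raised in the question.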

James Gan

With proxy_buffering off, nginx shouldn't be buffering the chunked responses from the backend. You don't need to set chunked_transfer_encoding on explicitly, it's the default. The next step I'd take towards diagnosis is to watch the stream between nginx and the backend (a quick bit of tcpdump -i lo -n port 5000 should do the trick) to see if nginx is, in fact, buffering, or if the behaviour of the backend has changed for some reason.

womble
  • Thanks for responding and clarifying the use of proxy_buffering. The problem is that I don't manage to *forward* chunked requests to my backend. I'm sure that this happens because I also have access to my backend with a direct tcp/ip port, then it just works. – Sunib Aug 20 '15 at 09:34
  • I understand your problem, and I'm providing my recommended next steps for debugging. – womble Aug 20 '15 at 23:10

For me the remedy was these two settings:

In the file /etc/nginx/nginx.conf, add:

proxy_max_temp_file_size 0;
proxy_buffering off;

between the lines client_max_body_size 128M; and server_names_hash_bucket_size 256;, so the http block looks like this:

http {
    client_max_body_size 128M;
    proxy_max_temp_file_size 0;
    proxy_buffering off;
    server_names_hash_bucket_size 256;
    # ... rest of the http block
}
algenib
  • Please don't post the exact same answer multiple times. In that case, answer once and flag the others as duplicates. – Sven Jun 14 '18 at 16:50