First some background: we have an embedded device that uploads a lot of small events to a web server. Chunked transfer encoding is used to POST this information. Every event is sent as a separate chunk, so the web server (node.js) can react to the events immediately. All of this works like a charm.
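For illustration, the receiving side looks roughly like this (a simplified sketch, not our actual handler; the port and the logging are just placeholders):

const http = require('http');

http.createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/embedded_endpoint/') {
        // With a chunked upload each chunk typically arrives as its own
        // 'data' event, so every event can be processed the moment the
        // device sends it.
        req.on('data', (chunk) => {
            console.log('event received:', chunk.toString());
        });
        req.on('end', () => {
            res.writeHead(200);
            res.end();
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(5000);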
Disabling the server and running netcat in its place shows what the device sends:
sudo nc -l 8080
POST /embedded_endpoint/ HTTP/1.1
Host: url.com
User-Agent: spot/33-dirty
Transfer-Encoding: chunked
Accept: text/x-events
Content-Type: text/x-events
Cache-Control: no-cache

120
{"some","json message"}
232
{"other","json event"}
232
{"and a lot more","up to 150 messages per second!"}
0
Now I have installed nginx (version 1.6.0) on my web server. In the end I want it to handle SSL and to speed up normal web traffic.
If I now enable nginx with this server config:
server {
    listen 8080;

    location / {
        proxy_http_version 1.1;
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass http://localhost:5000;
    }
}
Then I receive this:
sudo nc -l 5000
POST /embedded_endpoint/ HTTP/1.1
Host: localhost:5000
Connection: close
Content-Length: 2415
User-Agent: spot/33-dirty
Accept: text/x-events
Content-Type: text/x-events
Cache-Control: no-cache

{"some","json message"}
{"other","json event"}
{"and a lot more","up to 150 messages per second!"}
The problem is that this is buffered: the requests are now sent every 2 seconds. All messages are included and can be handled, but there is a delay now...
Can I instruct nginx to just forward my chunks directly? This only has to happen for this embedded endpoint. I understand that there is not much point in using nginx if this were disabled for all endpoints.
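To make the intent concrete, this is roughly the shape of config I am hoping for (just a sketch; I don't know which directive, if any, would fill in the missing part):

server {
    listen 8080;

    # Normal web traffic: keep nginx's usual buffering behaviour.
    location / {
        proxy_pass http://localhost:5000;
    }

    # Embedded endpoint only: pass each chunk straight through.
    location /embedded_endpoint/ {
        proxy_http_version 1.1;
        proxy_buffering off;
        chunked_transfer_encoding on;
        # ...plus whatever stops nginx from buffering the *request* body
        proxy_pass http://localhost:5000;
    }
}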