I have an Express.js backend that sends partial (streamed) responses on some endpoints. The pseudocode is roughly:
function (req, res, next) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  request.a.file(
    function response(middleChunk) {
      res.write(middleChunk);   // stream each partial chunk as it is produced
    },
    function final(endingChunk) {
      res.end(endingChunk);     // close the response with the last chunk
    }
  );
}
Hitting the Express instance directly with curl -v works like a charm: the messages show up progressively, and at the end the endingChunk arrives.
But I'm not exposing Express directly on my hosting; it sits behind an nginx reverse proxy, whose configuration looks like this:
server {
    listen 80;
    server_name funnyhost;
    root /directory;

    location ~ /api/.* {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_read_timeout 600s;
        proxy_buffering off;
    }
}
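For completeness, these are the other buffering-related directives I've seen suggested for streaming locations; the ones marked as assumptions are things to try, not confirmed fixes:

```nginx
location ~ /api/.* {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_read_timeout 600s;
    proxy_buffering off;              # don't accumulate the upstream response
    proxy_cache off;                  # assumption: rule out any cache layer buffering
    gzip off;                         # assumption: gzip can hold back small chunks
    chunked_transfer_encoding on;     # assumption: on by default, stated explicitly
}
```

Alternatively, nginx also honours an `X-Accel-Buffering: no` response header sent by the upstream app, which disables proxy buffering for that single response without touching the config.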
Setting proxy_buffering off at least made the server respond with something:
~$ curl -v "domain.com/api/endpoint?testme=true"
* About to connect() to domain.com port 80 (#0)
* Trying XX.XX.XX.XX... connected
> GET /api/endpoint?testme=true HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: domain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Transfer-Encoding: chunked
< Connection: keep-alive
< Date: Mon, 12 Aug 2013 17:43:45 GMT
< Server: nginx/1.5.3
< X-Powered-By: Express
<
but nginx still waits for the FULL response, until the Express code reaches res.end(), before sending any data to the client!
I'm getting desperate; I've wasted hours trying to make this work :( I hope someone can help.
ADDITIONAL INFORMATION
I'm using:
~# nginx -v
nginx version: nginx/1.5.3
Ubuntu 12.04 LTS server, Node.js v0.10.15, Express 3.3.5
As requested:
# nginx -V
nginx version: nginx/1.5.3
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-http_spdy_module --with-ipv6 --with-mail --with-mail_ssl_module --with-openssl=/build/buildd/nginx-1.5.3/debian/openssl-1.0.1e --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-cache-purge
UPDATE
To narrow down the possibilities, I'm using this test script https://gist.github.com/mrgamer/6222708
and the following nginx config file https://gist.github.com/mrgamer/6222734
Making the requests against localhost, everything works smoothly; making the same requests against my remote VPS, the response headers come back in a different order and the behaviour changes: the response gets printed out all together.
My local PC and the remote VPS both run Ubuntu 12.04 with Chris Lea's nginx PPA (launchpad.net/~chris-lea/+archive/nginx-devel); for testing purposes I executed on both:
~# sudo aptitude purge nginx nginx-common && sudo aptitude install nginx -y
The "strange" behaviour is listed below.
localhost test: headers are in the correct order and the response is correctly progressive
~$ curl -v localhost.test
* About to connect() to localhost.test port 80 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost.test
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.5.3
< Date: Tue, 13 Aug 2013 16:04:19 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
<
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Chunked transfer encoding test</title></head><body><h1>Chunked transfer encoding test</h1><h5>This is a chunked response after 2 seconds. Should be displayed before 5-second chunk arrives.</h5>
* Connection #0 to host localhost.test left intact
* Closing connection #0
<h5>This is a chunked response after 5 seconds. The server should not close the stream before all chunks are sent to a client.</h5></body></html>
remote test: headers come back in a different order and the response arrives all at once
~$ curl -v domain.com
* About to connect() to domain.com port 80 (#0)
* Trying XX.YY.ZZ.HH... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: domain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Connection: keep-alive
< Date: Tue, 13 Aug 2013 16:06:22 GMT
< Server: nginx/1.5.3
<
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Chunked transfer encoding test</title></head><body><h1>Chunked transfer encoding test</h1><h5>This is a chunked response after 2 seconds. Should be displayed before 5-second chunk arrives.</h5>
* Connection #0 to host test.col3.me left intact
* Closing connection #0
<h5>This is a chunked response after 5 seconds. The server should not close the stream before all chunks are sent to a client.</h5></body></html>