
I have an image upload service that nginx proxies requests to. Everything works great. Sometimes, though, the server already has the image the user is uploading, so I want to respond early and close the connection.

After reading the headers and checking with the server, I call Node's response.end([data][, encoding][, callback]).
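Roughly, the handler does this (a simplified sketch; the checksum header name and storage path are placeholders, and the real dedupe check lives elsewhere):

const http = require('http');
const fs = require('fs');
const path = require('path');

const UPLOAD_DIR = '/tmp/uploads'; // placeholder for the real image store

http.createServer((req, res) => {
  // Hypothetical header carrying the client's checksum.
  const sha1 = req.headers['x-file-sha1'];

  if (sha1 && fs.existsSync(path.join(UPLOAD_DIR, sha1))) {
    // Respond early and end the response; the request body is never read.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ duplicate: true }));
    return;
  }

  // Otherwise stream the upload to disk as usual.
  const dest = fs.createWriteStream(path.join(UPLOAD_DIR, sha1 || 'upload.tmp'));
  req.pipe(dest).on('finish', () => {
    res.end(JSON.stringify({ duplicate: false }));
  });
}).listen(1337);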

Nginx barfs and returns a blank response:

[error] 3831#0: *12879 readv() failed (104: Connection reset by peer) while reading upstream

My guess is that nginx assumes something bad happened on the upstream server and drops the client connection immediately, without sending the upstream server's response.

Does anyone know how to properly respond to and close the client's connection when nginx is the proxy? I know this is possible; see: sending the response before the request was in

Here is the nginx conf file:

worker_processes 8; # the number of processors
worker_rlimit_nofile 128; # each connection needs 2 file handles

events {
  worker_connections 128; # two connections per end-user connection (proxy)
  multi_accept on;
  use kqueue;
}

http {
  sendfile on;
  tcp_nopush on; # attempt to send HTTP response head in one packet
  tcp_nodelay off; # keep Nagle's algorithm: wait until we have the maximum amount of data the network can send at once
  keepalive_timeout 65s;

  include nginx.mime.types;
  default_type application/octet-stream;

  error_log /usr/local/var/log/nginx/error.log;
  log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  gzip off;

}

upstream upload_service {
  server 127.0.0.1:1337 fail_timeout=0;
  keepalive 64;
}

location /api/upload_service/ {
  # setup proxy to UpNode
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header Host $http_host;
  proxy_set_header X-NginX-Proxy true;
  proxy_set_header Connection "";
  proxy_pass http://upload_service;

  # The timeout is set only between two successive read operations
  proxy_read_timeout 500s;
  # timeout for reading the client request body, only for a period between two successive read operations
  client_body_timeout 30s;
  # maximum allowed size of the client request body, specified in the "Content-Length" request header field
  client_max_body_size 64M;
}
thesmart
  • Nginx is not a web browser, and it has different constraints around HTTP request/response ordering than web browsers allow. I would not be surprised if sending a reply ahead of time to a remote peer, through a previously established reverse-proxy connection for that peer, were not handled correctly by nginx. – Xavier Lucas Mar 17 '15 at 13:01
  • @XavierLucas That was my thought as well. I'm hoping there is a way to short circuit a request or at least notify the client early to allow it to terminate its connection. – thesmart Mar 17 '15 at 17:09
  • Did you send any response (meaning status and headers) before ending it? That may be what causes NGINX to complain. – Fox Mar 19 '15 at 23:00
  • @Fox Yes. As I say in my question, I called response.end with a payload. – thesmart Mar 20 '15 at 00:50
  • How could you know the file is identical, if you haven't received all of it yet? – kasperd Mar 26 '15 at 08:55
  • @kasperd The client sends a sha1 checksum that we compare against a known path in the filesystem. – thesmart Mar 26 '15 at 18:45
  • @thesmart You could consider redesigning the protocol so that for small files the client sends the full file and the server receives it all before answering. For large files, the client first asks the server if it has the file, and only sends it if needed (see the sketch after these comments). Also, SHA1 is deprecated; you should consider using something stronger. SHA512 is probably a good choice. – kasperd Mar 26 '15 at 19:07
  • @kasperd SHA1 is deprecated for use in cryptography, but as a general-purpose checksum it is fine, e.g. in the same role as CRC32 (which isn't unique enough for image dupe-checking). – thesmart Mar 31 '15 at 01:16
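A rough sketch of the "ask first" variant suggested in the comments (the endpoint path, header, and storage location are placeholders, not part of the original service):

const http = require('http');
const fs = require('fs');
const path = require('path');

const UPLOAD_DIR = '/tmp/uploads'; // placeholder for the real image store

http.createServer((req, res) => {
  // Lightweight pre-check: the client asks whether a file with this hash exists
  // before it starts transmitting the upload body.
  if (req.method === 'HEAD' && req.url.startsWith('/api/upload_service/exists/')) {
    const hash = path.basename(req.url);
    res.writeHead(fs.existsSync(path.join(UPLOAD_DIR, hash)) ? 200 : 404);
    res.end();
    return;
  }

  // ...the normal upload handler would go here...
  res.writeHead(404);
  res.end();
}).listen(1337);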

2 Answers


You don't mention what your clients are; however, this sounds like something you would achieve with an Expect header. In essence, the client sets an "Expect" header with a "100-continue" expectation, then waits for a 100 Continue response from the server before sending its request body.

If the server does not want to receive the body, it can respond with a final status, and the client does not send the body.

This process is defined in RFC2616, section 8.2.3
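A minimal sketch of how the upstream Node server could take part in this flow, assuming a hypothetical x-file-sha1 checksum header and a simple on-disk lookup; Node's http.Server emits a checkContinue event for requests that carry Expect: 100-continue:

const http = require('http');
const fs = require('fs');
const path = require('path');

const UPLOAD_DIR = '/tmp/uploads'; // placeholder for the real image store

function handleUpload(req, res) {
  // Normal upload path: consume the body and store it (simplified).
  const sha1 = req.headers['x-file-sha1'] || 'upload.tmp';
  req.pipe(fs.createWriteStream(path.join(UPLOAD_DIR, sha1)))
    .on('finish', () => res.end('stored'));
}

const server = http.createServer(handleUpload);

// Fired for requests that include "Expect: 100-continue".
server.on('checkContinue', (req, res) => {
  const sha1 = req.headers['x-file-sha1']; // hypothetical checksum header

  if (sha1 && fs.existsSync(path.join(UPLOAD_DIR, sha1))) {
    // Send a final status instead of 100 Continue; the client never
    // transmits the body, so nothing is left half-sent for nginx to abort.
    res.writeHead(409, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ duplicate: true }));
    return;
  }

  // Tell the client to go ahead with the body, then handle it normally.
  res.writeContinue();
  handleUpload(req, res);
});

server.listen(1337);

Whether the 100-continue exchange actually reaches Node in this setup also depends on how nginx buffers the request body, so it is worth verifying against your nginx version and proxy settings.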

ColtonCat

https://forum.nginx.org/read.php?2,254918,254918#msg-254918 mentions that RFC2616, section 8.2.2 is also relevant:

According to RFC2616, section 8.2.2, if the request contained a Content-Length and the client (nginx in this case) ceases to transmit the body (due to an error response), the client (nginx) would have to close the connection.