
I am uploading a 26 GB file, but I am getting:

413 Request Entity Too Large

I know this is related to client_max_body_size, so I have that parameter set to 30000M:

  location /supercap {
    root  /media/ss/synology_office/server_Seq-Cap/;
    index index.html;
    proxy_pass  http://api/supercap;
  }

  location /supercap/pipe {
    client_max_body_size 30000M;
    client_body_buffer_size 200000k;
    proxy_pass  http://api/supercap/pipe;
    client_body_temp_path /media/ss/synology_office/server_Seq-Cap/tmp_nginx;
  }

But I still get this error after the whole file has been uploaded.
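One thing worth double-checking (this is a sketch, not your exact config, and the request URI is assumed from the question): client_max_body_size only applies to requests that actually match the location it is set in, and any unmatched request falls back to the server/http value, whose default is just 1m. Setting it once at the http level sidesteps that ambiguity:

```nginx
http {
    # Applies to every server/location below unless overridden;
    # 0 disables the body-size check entirely.
    client_max_body_size 0;
}
```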

user2979409

4 Answers


Modify NGINX Configuration File

sudo nano /etc/nginx/nginx.conf

Search for the client_max_body_size directive. If you find it, just increase its value, to 100M for example. If it doesn't exist, add it at the end of the http block:

client_max_body_size 100M;

Test your nginx config changes.

sudo service nginx configtest

Restart nginx to apply the changes.

sudo service nginx restart

Modify PHP.ini File for Upload Limits

It's not needed on all configurations, but you may also have to modify the PHP upload settings to make sure PHP's own limits don't block the upload.

If you are using PHP5-FPM, use the following command:

sudo nano /etc/php5/fpm/php.ini

If you are using PHP7.0-FPM, use the following command:

sudo nano /etc/php/7.0/fpm/php.ini

Now find the following directives one by one:

upload_max_filesize
post_max_size

and increase their limits to 100M; by default they are 2M and 8M respectively.

upload_max_filesize = 100M
post_max_size = 100M
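Both nginx and PHP accept this shorthand size notation (K/M/G suffixes, powers of 1024). As an illustration of how those values are interpreted, here is a tiny size parser (`parse_size` is my own helper, not part of either program):

```python
# Illustrative only: convert shorthand sizes like "100M" or "30000M"
# to bytes, the way nginx and PHP interpret the K/M/G suffixes.
def parse_size(value: str) -> int:
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip()
    suffix = value[-1].upper()
    if suffix in units:
        return int(value[:-1]) * units[suffix]
    return int(value)  # a bare number means bytes

print(parse_size("100M"))    # 104857600 (100 MiB)
print(parse_size("30000M"))  # 31457280000 (~29.3 GiB, enough for a 26 GB file)
```

So the 30000M from the question is indeed large enough for the upload; the 413 must be coming from a context where that directive isn't in effect.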

Finally, save the file and restart PHP-FPM.

PHP5-FPM users use this:

sudo service php5-fpm restart

PHP7.0-FPM users use this:

sudo service php7.0-fpm restart

That should fix the error.

Sukhjinder Singh
  • why 100Mb for client_max_body_size in nginx.conf? – user2979409 Nov 14 '16 at 12:10
  • you can put any value according to your requirements, but 100 MB is usually sufficient – Sukhjinder Singh Nov 14 '16 at 12:14
  • but I am uploading up to 30Gb files. – user2979409 Nov 14 '16 at 12:19
  • find the largest file you expect and allow a bit more than that – Sukhjinder Singh Nov 14 '16 at 12:21
  • so why does client_max_body_size not work inside my /location? – user2979409 Nov 14 '16 at 12:23
  • try this one http://serverfault.com/a/304465/384531 – Sukhjinder Singh Nov 14 '16 at 12:25
  • You can set it to 0 to skip the max_body_size check. You can also put it in a location block, but that may conflict if you also have it in server (not sure); you can always review the documentation about that: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size Also make sure your client_body_temp_path has enough space. – Skamasle Nov 14 '16 at 21:31
  • This worked fine for me. I am marking this answer up (it had -1, no idea why; it's now back to 0). Thanks for this. Specifically "If it doesn’t exist, then you can add it inside and at the end of http" — did the trick nicely. Used this to increase upload size on a MAMP PRO 4 Nginx installation. – inspirednz Jun 09 '17 at 01:37
  • Yes, I don't understand why but I had problems until I moved it at the very end of the `http` block. – lapo Jan 25 '19 at 19:34

If you're uploading files of that size you should probably just disable the body size check altogether with:

client_max_body_size 0;
devius

With respect, I'm not sure why you're using HTTP to transfer that much data. I tend to do my large transfers over SSH, such as:

tar cjf - /path/to/stuff | ssh user@remote-host "cd /path/to/remote/stuff; tar xjf -"

...which gives me a bzip2-compressed transfer. But if I needed to do a resumable transfer, I might use sftp, lftp, or even rsync. Any of those (or their derivatives or siblings) is capable of

  1. employing an encrypted channel if desired,
  2. resuming an interrupted transfer, and
  3. compressing the transfer

Only one of those would be an option to you when attempting to upload over HTTP (namely #1, if you were on HTTPS).

I hope you'll look into any of the above or the several other alternatives.
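As a concrete illustration (the host and paths are placeholders, not from the question), a resumable, compressed transfer with rsync over SSH might look like:

```shell
# -a preserves attributes, -z compresses in transit, and
# -P (--partial --progress) keeps a partially transferred file
# so a re-run resumes instead of starting over.
rsync -avzP /path/to/26gb-file user@remote-host:/path/to/dest/
```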

Matt Murphy
  • This has like nothing to do with the question. – Zenklys Nov 07 '18 at 10:10
  • It's only there because the guy is transferring Blu-ray-scale files over HTTP. Without having more detail of why someone would want to do this, I judge that the guy asking the question would be best served by not successfully answering the question. I acknowledge that my ignorance of those details is a problem. – Matt Murphy Nov 09 '18 at 20:56
  • Are you sure about 2 and 3? – Tom Feb 11 '21 at 11:17
  • @tom sftp/lftp/rsync ... yeah I'm pretty sure those resume and compress. But if you're looking at my tar pipe to ssh, yeah that doesn't resume, just compresses. – Matt Murphy Oct 01 '21 at 16:02

Add the following line to the http, server, or location context in nginx.conf to increase the size limit:

client_max_body_size 100M;

Source: https://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/

Note that when the client base64-encodes the upload (as some clients do), the request body will be roughly 1.33-1.34× larger than the file itself. For example, the 100M limit above would only allow a file of roughly 74M to be uploaded.
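The overhead figure comes from base64 encoding, which expands every 3 bytes of input into 4 bytes of output (a ratio of 4/3, plus a little framing). A quick sketch of the arithmetic:

```python
import base64

file_size = 300  # a pretend 300-byte file
body_size = len(base64.b64encode(b"x" * file_size))
print(body_size)              # 400: every 3 input bytes become 4 output bytes
print(body_size / file_size)  # 1.333..., so a 100M limit fits a ~75M file
```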

Pothi Kalimuthu
700 Software