I'm trying to set up a ColdFusion server to accept uploads of large files, and I'm bumping into some limits. Here's what I've observed so far:
There are two settings in ColdFusion Administrator that limit the size of uploads: "Maximum size of post data" and "Request Throttle Memory". If the size of your upload (including HTTP overhead) is bigger than either one of those settings, the upload is rejected. I can't figure out why we need two of these; whichever one is set higher seems to have no effect at all. The lower one wins.
When someone tries to upload a file that's too big, they don't get a nice error message. The upload just hangs forever after sending about one TCP window's worth of data, and it hangs in a really bad way: even after the client gives up and disconnects, the associated Apache thread stays tied up (I can see this in mod_status). The stuck threads keep building up until there are none left to take new requests and the server has to be restarted.
The "Request Throttle" is a thing I really don't understand. All documentation about it speaks of it as the size of a memory region. If that's what it is, then I can't see how it's related to file sizes. It hints at something I just don't want to believe: that ColdFusion slurps the entire uploaded file into memory before writing any of it to disk. No sane person would do that, when an upload loop (read a medium-sized block, write it to disk, repeat until finished) is so easy. (I know the structure of an HTTP multipart/form-data post makes it a little bit harder but... surely a big company like Adobe with a web development product can get this right... can't they?)
If whole-file slurping is actually what's going on, how do they expect us to choose a workable size limit? Allow a gigabyte and a few simultaneous users can run your server out of memory without even trying. And what are we going to do, not allow gigabyte uploads? People have videos to post and no time for editing them!
ADDITIONAL INFO
Here are some version numbers.
Web server:
Server: Apache/2.2.24 (Win64) mod_jk/1.2.32
ColdFusion:
Server Product: ColdFusion
Version: ColdFusion 10,285437
Tomcat Version: 7.0.23.0
Edition: Enterprise
Operating System: Windows Server 2008 R2
OS Version: 6.1
Update Level: /E:/ColdFusion10/cfusion/lib/updates/chf10000011.jar
Adobe Driver Version: 4.1 (Build 0001)
ADDITIONAL INFO #2
I don't know why you'd want to know what values I've put in the limit fields, but they were both set to 200 MB for a while. I increased "Maximum size of post data" to 2000 MB and it had no effect. I've already figured out that if I increase "Request Throttle Memory" to 2000 MB it will allow a larger upload. What I'm looking for here is not a quick "stuff a bigger number in there!" answer, but a detailed explanation of what these settings actually mean and what implications they have for server memory usage.
Why the server thread stalls forever instead of returning an error message when the limit is exceeded could be a separate question. I assumed this would be a well-known problem; maybe I should first ask whether anyone else can reproduce it. I've never seen a "file too large" error message returned to a client from ColdFusion. Is it supposed to have one?
ADDITIONAL INFO #3
Some experimentation has led me to a partial answer. The first thing I was missing was that "Request Throttle Memory" (RTM) does something useful if it is set higher than "Maximum size of post data" (MSOPD). In my first round of tests, with no clue about the relationship between them, I had them the other way around. With my new understanding, I can see that the ratio RTM/MSOPD is the number of simultaneous uploads that will be allowed if they are all near the maximum size.
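To make that arithmetic concrete (this is my inference from experimentation, not documented behavior), here is the back-of-envelope calculation using the numbers from my own settings:

```java
public class ThrottleMath {
    public static void main(String[] args) {
        long requestThrottleMemoryMB = 2000;  // "Request Throttle Memory"
        long maxPostSizeMB = 200;             // "Maximum size of post data"
        // My reading: concurrent near-max-size uploads ~= RTM / MSOPD
        long concurrentMaxSizeUploads = requestThrottleMemoryMB / maxPostSizeMB;
        System.out.println(concurrentMaxSizeUploads + " max-size uploads at once"); // prints 10
    }
}
```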
Assuming that the "Request Throttle Memory" is actually a memory buffer, and not a temporary file, this means that my worst fears were correct. Every file is kept entirely in memory for the full duration of its upload. Nobody has said anything to make me believe otherwise (although I also don't see anyone jumping up to say "yes, they did this stupid thing")
Also with this new understanding, the stalled uploads make some sense. The server doesn't have memory available to accept the upload, so it simply stops reading from the socket. The TCP buffers fill up, the window size drops to 0, and the client waits for it to open up again, which should happen as soon as the server starts reading the request. But in my case, for some reason, that never happens. The server seems to forget about the request entirely, so it just lingers.
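If anyone wants to try to reproduce the stall, something like the client below ought to show the behavior. It's only a sketch with placeholder host and path, and it sends a raw octet-stream body rather than building a real multipart post, which should still be enough to exceed the post-size limit; the idea is that the progress output stops advancing once the server quits reading and the receive window closes.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StallRepro {
    public static void main(String[] args) throws Exception {
        long bodyBytes = 300L * 1024 * 1024;  // deliberately over the 200 MB limit
        try (Socket s = new Socket("my-cf-server", 80)) {   // placeholder host
            OutputStream out = s.getOutputStream();
            String headers = "POST /upload.cfm HTTP/1.1\r\n" // placeholder path
                    + "Host: my-cf-server\r\n"
                    + "Content-Type: application/octet-stream\r\n"
                    + "Content-Length: " + bodyBytes + "\r\n\r\n";
            out.write(headers.getBytes(StandardCharsets.US_ASCII));
            byte[] chunk = new byte[64 * 1024];
            for (long sent = 0; sent < bodyBytes; sent += chunk.length) {
                out.write(chunk);                    // blocks here once the server
                System.out.println("sent " + sent);  // stops reading (zero window)
            }
        }
    }
}
```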
The case of "Maximum size of post data" being hit is still a bit of a mystery. Requests that hit a hard limit shouldn't be queued, just rejected. And I do get a rejection message ("Post Size exceeds the maximum limit 200 MB.") in server.log
. But again in this case the server seems to forget about the request without ever sending an error to the client.