
I'm trying to debug some failed HTTP POST requests containing large file uploads (~500 MB). The end user is receiving strange HTTP responses that are not being logged by Varnish's varnishncsa or varnishlog facilities, or by any of the internal web server logs. Since I'm not able to reproduce the issue (it may be caused by an ISP's proxy), I'm trying to find a way to log the entire request (identified by URL) and replay it later on a development or staging server.

It looks like Snort may be the best way to approach the problem, since it lets us log the incoming packets before they are (mis)interpreted, but I'm concerned that it may introduce significant latency, memory overhead, or other unforeseen issues with such large requests. All of the matching we need to do is based on the URL, so only the first KB or so of each request actually needs to be inspected, but we need the rest of the request in order to replay it.
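For what it's worth, the sort of rule I'm picturing looks roughly like the following; the URI is a placeholder for our actual upload endpoint, and my understanding from the docs is that the tag option should keep logging the rest of a session once the first packet matches:

    log tcp any any -> $HOME_NET 80 (msg:"capture large POST for replay"; \
        flow:to_server,established; \
        content:"POST /upload/target"; depth:64; \
        tag:session,600,seconds; \
        sid:1000001; rev:1;)

The 600-second tag window is a guess at how long a ~500 MB upload might take over a slow link.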

I'm looking at README.stream5 in the Snort docs, which makes this appear possible and reasonable, but the gap between the docs and the real world can be fairly wide.
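Based on my reading of README.stream5, I'm assuming the tuning would look something like this; the memcap, session limit, and timeout are guesses I'd need to validate against real ~500 MB uploads:

    preprocessor stream5_global: track_tcp yes, \
        max_tcp 8192, \
        memcap 536870912
    preprocessor stream5_tcp: policy linux, \
        ports both 80, \
        timeout 180, \
        max_queued_bytes 0

In particular, max_queued_bytes defaults to 1 MB, which seems far too small for reassembling uploads of this size, so I'd expect to raise it or set it to 0 (unlimited) at the cost of memory.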

Is Snort well-suited to this task? Are there any optimizations I can apply to avoid excessive memory, disk, or processor overhead? If you don't believe Snort is well-suited to this task, what approach would you suggest?

The server installation is globally distributed across headless boxes running a recent Linux setup, so any solution must be scriptable, automated, and able to report back to me by some means when it has captured a request.
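Absent something built in, I'd probably handle the reporting with a small watcher along these lines; the log directory, filename pattern, and report endpoint are placeholders for whatever we standardize on:

    #!/usr/bin/env python3
    # Hypothetical sketch: poll Snort's capture directory and phone home
    # whenever a new log file appears. Paths and URL are placeholders.
    import glob
    import os
    import socket
    import time
    import urllib.request

    LOG_DIR = "/var/log/snort"                    # assumed Snort output dir
    REPORT_URL = "http://ops.example.com/report"  # placeholder endpoint

    reported = set()

    while True:
        for path in glob.glob(os.path.join(LOG_DIR, "snort.log.*")):
            if path in reported:
                continue
            body = "host={} file={} bytes={}".format(
                socket.gethostname(), path, os.path.getsize(path))
            try:
                # POST a one-line summary so the capture can be fetched later.
                urllib.request.urlopen(REPORT_URL, data=body.encode())
                reported.add(path)  # only mark as sent after a successful POST
            except OSError:
                pass  # leave unmarked so it is retried on the next pass
        time.sleep(60)

Dropping that behind cron or an init script should be trivial on these boxes.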
