There seems to be an issue with serving large (10+ GB) files with byte-range requests on our RHEL5 64-bit server. What I am seeing is that range requests time out when the range crosses the 10GB (ten gigabyte) mark, whereas range requests at any point in the file before that are fast.
I have been testing with a Java servlet on Tomcat and with Apache HTTPD (2.2), and it fails on both.
If I set up a test that performs curl range requests on every range up to and past the 10GB mark, it fails at exactly 10GB (or, more precisely, at byte offset 10,000,000,000, which works out to about 9.3132 GiB):
$ curl -w '%{time_total}\n' --header 'Range: bytes=9999900000-10000000000' http://ourserver.com/bam/file.bam --fail --silent --show-error
22.840
curl: (56) Recv failure: Operation timed out
It is at this point that the range requests always fail. Any range request before the 10GB point in the file is very fast and error-free.
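To check whether the server at least gets the response headers out for a failing range, I can dump the headers only, discard the body, and cap the wait (same URL as above; the 30-second cap is an arbitrary choice):

# Dump response headers to stdout, discard the body, and give up after 30s.
# A quick "206 Partial Content" plus Content-Range followed by a stalled body
# would point at the transfer itself rather than at computing the range.
curl -s -D - -o /dev/null --max-time 30 \
     --header 'Range: bytes=9999900000-10000000000' \
     http://ourserver.com/bam/file.bam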
The fact that it happens with both Tomcat and Apache makes me suspect this isn't an Apache or Tomcat configuration issue. Are there any clues as to what could be happening?
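For what it's worth, one check I can run on the server itself to rule out the filesystem layer is to read the file directly at the offending offset (a minimal sketch; /data/bam/file.bam stands in for wherever the file actually lives):

# Read the single 100000-byte block starting at byte 9,999,900,000
# (99999 * 100000) straight off disk; if this returns quickly, raw reads
# past the 10GB offset are fine and the problem is in the network path.
dd if=/data/bam/file.bam bs=100000 skip=99999 count=1 2>/dev/null | wc -c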
Script for testing
#!/bin/bash
set -evx

begin=9900000000   # first start offset to request (just below the failure point)
grab=100000        # number of bytes requested in each range
iter=100000        # amount to advance the start offset each iteration
max=16000000000    # stop once the start offset passes this
url=$1

for i in $(seq -f "%.0f" $begin $iter $max); do
    i2=$(($i + $grab))
    echo -en "$i\t$i2\t"
    curl -o /dev/null -w "%{time_total}\n" --header "Range: bytes=$i-$i2" "$url" --fail --silent --show-error
done
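I run it like this, redirecting the tab-separated output to a file (the script and output filenames here are just examples):

./rangetest.sh http://ourserver.com/bam/file.bam > range-times.tsv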
(Picture of the time_total for every range request up to the 10GB mark, where the 20-second values indicate timeouts.)