I am timing some code and I would like to tell how much of the time taken is due to reading the data in from disk. I don't believe the result that time gives me. For example, I have a 1.3 GB file, and if I run wc on it I get:
time wc largefile.file
50000000 150000000 1316665179 largefile.file
real 0m26.835s
user 0m18.363s
sys 0m0.495s
It can't possibly have taken < 0.5 seconds to read in the file from my old hard drive.
Is there a reliable way to tell how much of the time was due to I/O?
Further details on why I don't see how to interpret the output of time: if I do
time cat largefile.file > /dev/null
real 0m24.230s
user 0m0.060s
sys 0m1.473s
then it is tempting to say that about 22.7 seconds (real minus user and sys) are spent on I/O. But the same arithmetic applied to the wc figures above implies about 8 seconds. These two figures are not consistent.
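The cold-cache measurement being discussed can be sketched as below. This is a minimal illustration, not a definitive benchmark: the drop_caches step assumes Linux and root, so it is left commented out, and a small temporary file stands in for largefile.file. It captures bash's time output and prints it in real/user/sys form so the real - (user + sys) arithmetic can be applied.

```shell
#!/bin/bash
# Sketch: time a read with a cold page cache, then estimate I/O wait
# as real - (user + sys). Assumes Linux and bash. The cache-dropping
# step needs root, so it is shown here but commented out:
#
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

f=$(mktemp)                           # small stand-in for largefile.file
dd if=/dev/zero of="$f" bs=1M count=4 status=none

# bash's `time` keyword writes to the stderr of the enclosing group,
# so redirecting the { } group captures it.
TIMEFORMAT='real=%R user=%U sys=%S'
t=$( { time cat "$f" > /dev/null; } 2>&1 )
echo "$t"                             # e.g. real=0.01 user=0.00 sys=0.01

rm -f "$f"
```

On a cached read like this one, real is close to user + sys; after a genuine cache drop on a slow disk, the gap between them is the time spent blocked on I/O.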
Better redo the two measurements, rebooting before each one. If the file is even partially in memory then the measurement is false. – harrymc – 2014-05-16T13:21:03.983
@harrymc I just did
sync && sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'
first and get the same result. It's not a caching effect as the overall time is the same. – Lembik – 2014-05-16T13:29:47.157
sync doesn't clear the memory cache - it just ensures that blocks marked as dirty are written to the disk. – harrymc – 2014-05-16T16:44:16.017
@harrymc OK but if you don't do the sync the timing is about 0 seconds. – Lembik – 2014-05-17T18:01:31.710
Because Linux uses the memory cache very effectively, one must be very careful in measuring. – harrymc – 2014-05-17T21:00:42.810