I am trying to understand how the write-back cache on a RAID controller can continue to yield a benefit while you're writing a huge random dataset. Let me first say that I understand how a write-back cache works: the OS gets an I/O completion on a write when the data hits the controller cache rather than the slower underlying medium, and the controller then writes the data out to the medium as fast as it can. That being the case, if you're writing data faster than the backing medium can absorb it, I would expect to fill the controller cache, at which point the performance gain of write-back should go away and you'd end up with performance equivalent to a write-through setup.
What I described above is what I'd expect, but it's not what I'm seeing. The server with the write-back cache consistently sustains at least 4x higher IOPS and throughput compared to an identical server that I put into write-through mode. Any ideas on how to explain this behavior?
And yes, I am writing way more than enough data to saturate the filesystem cache, and I'm writing it very quickly.
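One way to sanity-check the claim above while the test runs (a sketch, assuming a Linux box; the file names are standard /proc entries): watch the kernel's dirty-page counters. If the page cache really is saturated, Dirty and Writeback will sit at large, roughly steady values for the duration of the write.

```shell
# Poll the kernel's dirty-page accounting once a second during the test.
# Dirty     = data buffered in the page cache, not yet submitted to the device
# Writeback = data currently being flushed to the device
while true; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    sleep 1
done
```

If Dirty stays pinned near the vm.dirty_ratio ceiling while throughput remains high, the page cache is not what's absorbing the writes, which points back at the controller.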
That somewhat makes sense. But take a simple example: cat /dev/zero > /mnt/a. I would expect the cache to either fill up as I said, or you'd see a ton of time spent in I/O wait. I'm not seeing either, so I think I'm still missing something. – Vitalydotn – 2015-12-08T23:38:58.097
cat is designed for displaying text on screen; it is optimized for quick response time, not for high I/O performance. You should really use dd if=/dev/zero of=/mnt/a bs=1M. – Dmitry Grigoryev – 2015-12-09T06:52:20.290
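To expand the dd suggestion into something that actually measures the layers being debated (a sketch; the target path and sizes are illustrative, not from the thread): conv=fdatasync forces a flush before dd reports a rate, so the page cache cannot inflate the number, and oflag=direct bypasses the page cache entirely, so any remaining write-back benefit comes from the controller cache alone.

```shell
# Illustrative benchmark sketch -- /tmp/wb_test and the 256 MiB size are placeholders.

# Buffered write, but flushed before the rate is reported: measures
# page cache + controller cache + medium together.
dd if=/dev/zero of=/tmp/wb_test bs=1M count=256 conv=fdatasync

# O_DIRECT write: skips the kernel page cache, so the difference between
# write-back and write-through controller modes shows up directly here.
# (O_DIRECT is not supported on all filesystems, e.g. tmpfs.)
dd if=/dev/zero of=/tmp/wb_test bs=1M count=256 oflag=direct

rm -f /tmp/wb_test
```

Running the oflag=direct variant on both servers, in write-back and write-through modes, would isolate how much of the observed 4x gap is the controller cache versus the kernel's own write-back behavior.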