It depends heavily on how long it takes to retrieve two disk sectors versus retrieving one sector and decompressing it into two sectors' worth of data.
On an HDD, the biggest delay comes from seeks and platter rotation, so defragmenting the disk (or a single file) speeds up sequential reads greatly.
On an SSD there is no seek delay, so defragmenting has a much smaller effect (though it still makes the list of file extents in the MFT more compact, avoiding the extra lookup per fragment).
Example: I run OpenBSD off a CF card, and the kernel is 20 MB, which reads in 10 seconds. Compressed, it shrinks to about 6 MB, which reads in 3 seconds plus one more second for decompression. In this embedded case that cuts 6 seconds off the boot sequence, so compressing read-only files pays off nicely (a PXE-bootable installer, for example, would be a good candidate).
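As a sanity check on that trade-off, here is a minimal sketch of the arithmetic; the throughput figures come from the CF-card example above, and the function and variable names are just illustrative:

    def boot_read_time(size_mb, read_mbps, decompress_s=0.0):
        """Seconds to pull size_mb off the medium at read_mbps, plus decompression."""
        return size_mb / read_mbps + decompress_s

    # CF card reads ~2 MB/s (a 20 MB kernel loads in 10 s)
    uncompressed = boot_read_time(20, 2.0)                 # 10.0 s
    compressed = boot_read_time(6, 2.0, decompress_s=1.0)  # 4.0 s
    print(f"saved {uncompressed - compressed:.0f} s")      # saved 6 s

Compression wins whenever the time saved reading fewer bytes exceeds the time spent decompressing, which is easy to hit on slow media like CF cards.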
A bigger problem shows up when you compress system databases such as the Windows Update store or SQL Server files, where large parts of the file get recompressed on every change (that is: read, decompress, modify, recompress, write), leading to ugly performance and enormous fragmentation.
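To see why in-place updates hurt, here is a minimal sketch, with Python's zlib standing in for the filesystem's chunk compressor (NTFS actually works on 64 KB compression units): changing even one byte forces a full decompress-modify-recompress cycle, because a compressed stream cannot be patched in place.

    import zlib

    # Simulated "database page" stored compressed on disk.
    page = bytes(65536)                      # one 64 KB chunk
    stored = zlib.compress(page)

    # Updating a single byte requires decompressing the whole
    # chunk, modifying it, and recompressing it from scratch.
    chunk = bytearray(zlib.decompress(stored))
    chunk[1234] = ord(b"y")                  # the one-byte update
    stored = zlib.compress(bytes(chunk))     # full recompress + rewrite

And since the recompressed chunk rarely comes out the same size as before, the filesystem usually has to relocate it, which is where the fragmentation comes from.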
At today's disk prices, I'd suggest simply buying a faster disk for local speed and limiting compression to strictly read-only scenarios such as netboot images or install DVDs.
Good question; the answer is that it depends! It depends mostly on the data being compressed: if it's highly compressible, then compression may speed things up. "May", because there are other factors, such as whether the data is transactional. – user33788 – 2010-06-25T16:53:35.303
I'm seeing interesting things (http://hardforum.com/showthread.php?t=1520475) for SSDs; an actual test of this would be awesome. I hope that I can find one... – Tamara Wijsman – 2012-04-24T07:57:04.333