There is no fixed ratio even on a single machine, and using multiple machines (of different types) can certainly change the results. Compression and decompression involve the storage device (hard drive or SSD), the processor, and memory, so the balance depends on the hardware.
As a broad generalization, decompression is fast, and may even be faster than copying the same amount of uncompressed data. Compression can be similarly fast for simple schemes such as RLE. For zip and gzip, however, common implementations compress more slowly than they decompress, and you can often squeeze out another 5%-15% of compression by choosing more aggressive options that take 2-4 times as long.
The difference is largely because compression involves searching for matches (sometimes thought of as "guessing"), and many of those attempts are fruitless. Decompression, in contrast, just replays a pre-established process, so it goes relatively quickly.
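If you want to see how this plays out on your own data and hardware, here is a minimal sketch using Python's gzip module. The file name sample.dat is a placeholder for whatever you want to test, and the exact numbers will depend entirely on your machine and how compressible the data is.

    import gzip
    import time

    # Placeholder input file; substitute your own data.
    PATH = "sample.dat"

    with open(PATH, "rb") as f:
        data = f.read()

    # Compare a fast, the default, and an aggressive compression level,
    # timing compression and decompression separately.
    for level in (1, 6, 9):
        t0 = time.perf_counter()
        packed = gzip.compress(data, compresslevel=level)
        t1 = time.perf_counter()
        gzip.decompress(packed)
        t2 = time.perf_counter()
        print(f"level {level}: ratio {len(packed) / len(data):.3f}, "
              f"compress {t1 - t0:.3f}s, decompress {t2 - t1:.3f}s")

On typical text-like data you should see the ratio improve modestly at level 9 while compression time grows much faster than decompression time, which stays roughly constant across levels.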
Some benchmarks. But differences in hardware between source and target machines can make the result vary widely.... – xenoid – 2017-06-22T19:59:53.713

Interesting results, thanks for the link. Most of the machines I'm dealing with have similar hardware, so I can still have an idea. I'm mostly concerned about decompression, so it seems like gzip is the best option for me, with decompression being about 10 times faster than compression. – radschapur – 2017-06-22T21:03:35.857

I'd expect disk I/O to be the bottleneck in both processes. Writing tends to be faster than reading, because buffering means the writer doesn't have to wait for the disk. – Barmar – 2017-06-23T00:40:05.737