I wanted to back up a path from one computer on my network to another computer on the same network over a 100 Mbit/s line. For this I ran
dd if=/local/path of=/remote/path/in/local/network/backup.img
which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly so that there would be much less data to transfer. So I ran
dd if=/local/path | gzip > /remote/path/in/local/network/backup.img.gz
But now I got a network transfer speed of about 1 MB/s, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files, and the result was always the same.
Why does piping dd through gzip also increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate due to the higher CPU load while compressing, but now I get a double win. Not that I'm unhappy, I'm just wondering. ;)
The simple answer is that dd is outputting at 1 MB/s... right into the waiting gzip pipe. It's got very little to do with block size. – Tullo_x86 – 2016-10-21T04:59:32.823

512 bytes was the standard block size for file storage in early Unix. Since everything is a file in Unix/Linux, it became the default for just about everything. Newer versions of most utilities have increased that, but not dd. – DocSalvager – 2014-06-05T20:04:41.660
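The 512-byte default mentioned above is easy to observe by copying a single block with and without an explicit bs= (a sketch assuming GNU dd; wc -c just counts the bytes that come through):

```shell
# Without bs=, dd moves data in 512-byte blocks: one block -> 512 bytes.
dd if=/dev/zero count=1 2>/dev/null | wc -c      # prints 512
# With an explicit block size, each block is that large instead.
dd if=/dev/zero bs=1M count=1 2>/dev/null | wc -c  # prints 1048576
```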