
The scp man page shows that the -C flag can be used for compressing a file on the fly while performing a remote copy -

$ man scp | grep "\-C"
     -C      Compression enable.  Passes the -C flag to ssh(1) to enable compression.

However, when I perform scp with and without the flag, the file size remains the same -

Without the flag:

$ scp root@remote-host:/path/to/file/* .
Password:
core_dump                                                                100% 7832MB 110.6MB/s   01:10

$ ls -lh
total 7.7G
-rw------- 1 user group 7.7G Jan  4 16:19 core_dump

With the compression flag:

$ scp -C root@remote-host:/path/to/file/* .
Password:
core_dump                                                                100% 7832MB  69.8MB/s   01:52
$ ls -lh
total 7.7G
-rw------- 1 user group 7.7G Jan  4 16:21 core_dump

I've tried several other options but all yield the same result:

$ scp -o Compression=yes root@remote-host:/path/to/file/* .

$ scp -C -o Compression=yes root@remote-host:/path/to/file/* .

$ scp -C -o Compression=yes -o CompressionLevel=9 root@remote-host:/path/to/file/* .

Is there something that I'm missing here?

Anish Sana

2 Answers


ssh -C is compression for data in motion on the wire, not data at rest. In other words, ssh compresses and decompresses in the transport layer, so the file is stored uncompressed on both ends.

With ssh and pipes, it's possible to compress on the remote host, receive that compressed stream on standard output, and write it out on the local host. That compresses the data both in motion and at rest.

ssh root@remote-host "zstd /path/to/file/core_dump --stdout" > core_dump.zst

The quotes are significant: they mark the remote command. Replace zstd with gzip or xz for your desired compression format.
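If you later need the uncompressed file locally, decompress the stream on your side. A minimal sketch, assuming zstd is installed locally and the file name matches the command above:

```shell
# Decompress core_dump.zst into core_dump.
# -d selects decompression; -o names the output file.
# zstd keeps the input file unless --rm is given.
zstd -d core_dump.zst -o core_dump
```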

It's also possible to pipe tar archives through ssh, though that isn't necessary here with a single file.
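For reference, a tar pipe for a whole directory might look like the following sketch (the paths are hypothetical; tar's -z applies gzip compression on the remote side before the stream crosses the wire):

```shell
# Archive and gzip-compress the directory on the remote host,
# stream the result over ssh, and write the archive locally.
ssh root@remote-host "tar -C /path/to/file -czf - ." > files.tar.gz
```

Extract locally with `tar -xzf files.tar.gz`.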

John Mahowald
  • This makes sense. I sort of had the feeling that the data was being compressed and decompressed, mainly because it was taking longer to `scp` with the `-C` flag, but couldn’t find any concrete information on that. Thanks! – Anish Sana Jan 05 '22 at 04:51
  • Yes. With sufficiently large files, the limiting factor is how fast ssh can decompress, or even the write speed of the storage. When sending compressed streams there is no decompression step, and the total bytes transferred and written are reduced by the compression ratio. And zstd is fast and multi-threaded. – John Mahowald Jan 05 '22 at 15:08

The -C flag enables gzip-style compression in the underlying ssh transport (scp runs on top of ssh).

This will speed things up on sluggish connections, but on any decently fast connection (100 Mbit/s or faster), the compression will almost certainly slow things down.

It will be more or less efficient than zip depending on whether gzip (particularly gzip -6, the default level) is more or less efficient than the compression level you choose in zip.
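To see the trade-off yourself, you can compare compressed sizes at different gzip levels on a sample file. A sketch, where core_dump stands in for whatever file you're transferring:

```shell
# Compare output sizes at gzip levels 1 (fastest), 6 (default), and 9 (best).
# Higher levels cost more CPU time for (usually) smaller output.
for level in 1 6 9; do
  size=$(gzip -"$level" --stdout core_dump | wc -c)
  echo "level $level: $size bytes"
done
```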