Transferring large files using scp with CPU and memory considerations

3

I want to transfer an arbitrarily large file (say >20GB) between 2 servers. I have several considerations:

  • Must use port 22 (ssh) because of firewall restrictions

  • Cannot tax the CPU (production server)

  • Memory efficiency

  • Would prefer a checksum check but that could be done manually

  • Time is not of the essence

I would appreciate an answer for several scenarios:

  1. Server A and Server B are on the same private network (sharing a switch) and data security is not a concern

  2. Server A and Server B are not on the same network and transfer will be via the public internet so data security is a concern

My first thought was to use nice on an scp command with a non-CPU-intensive cipher (Blowfish?), but I thought I'd refer to the SU community for recommendations.
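(For concreteness, that idea might look roughly like the sketch below. Host names and paths are placeholders, and note that "blowfish" was a valid OpenSSH cipher name in this era; modern releases have removed it, so on a current system you would substitute something like aes128-ctr.)

    # Scenario 1 (trusted private network): low scheduling priority,
    # cheap cipher to keep CPU load down.
    nice -n 19 scp -c blowfish /data/bigfile user@serverB:/data/

    # Scenario 2 (public internet): keep the default cipher (AES) for
    # security; the network link, not encryption, is the usual bottleneck.
    nice -n 19 scp /data/bigfile user@serverB:/data/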

Belmin Fernandez

Posted 2010-10-05T00:58:18.400

Reputation: 2 691

Are you absolutely sure you can't use FTP, Samba (if they're Windows), or other faster file transfer methods? – TheLQ – 2010-10-05T01:26:40.307

In this scenario, we only have the SSH service available. – Belmin Fernandez – 2010-10-05T01:50:12.910

Answers

2

scp should work fine. Over the internet, overall speed is usually determined more by the network than by the encryption scp performs. On the private network, your plan to use blowfish to ease the CPU load a bit is good. Personally, I would not use the nice command unless your production CPU load is already high; most servers are I/O-bound, not CPU-bound, but you know your system better than I do. And definitely do an md5 or sha256 checksum on the result.
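A minimal way to do that checksum, assuming GNU coreutils on both machines (the file path is a placeholder):

    # On the source server:
    sha256sum /data/bigfile
    # On the destination server, after the transfer completes:
    sha256sum /data/bigfile
    # The transfer is intact if the two hashes match exactly.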

hotei

Posted 2010-10-05T00:58:18.400

Reputation: 3 645

Thanks. Was thinking there might be a better solution but this sounds good to me. – Belmin Fernandez – 2010-10-06T00:50:02.253