
Hello, I'm migrating a server. Here are the details of both servers:

Copy from Location

The current server has 450 GB of data in the home drive, which is critical and must be copied. It has 7200 RPM SATA disks in RAID 0 with a transfer rate of 24-27 Mb/s on a gigabit (1000 Mb/s) LAN, 6 GB of RAM, a dual-core Xeon processor, and CentOS 5.8.

Copy to Location

The new server is equipped with a 15000K SSD in RAID 0, 12 GB of RAM, a quad-core Xeon, and CentOS 5.8.

I'm using scp and it is taking days: after 3 days only 290 GB has been copied to the new server. Please suggest an open-source snapshot tool that copies the whole data set, with all file permissions, faster and accurately.

Thanks.

vautee
Shoaib
  • By "new location" do you mean "I'm transferring all my data over the public internet"? If so your problem is almost certainly your connection between the two sites, not your tools. – voretaq7 Aug 28 '13 at 15:53
  • It is on the same network. – Shoaib Aug 29 '13 at 04:11
  • Then you need to start giving more details on the troubleshooting you've done thus far, and the environment (especially network infrastructure) so we can actually address the problem. Start with the items [in MadHatter's answer](http://serverfault.com/a/534374/32986), and see [this meta topic](http://meta.serverfault.com/questions/3608/how-can-i-ask-better-questions-on-server-fault) for some additional tips... – voretaq7 Aug 29 '13 at 04:33

2 Answers


Try rsync. It can operate over ssh, and the initial transfer might be about as slow as with scp, but subsequent runs are much, much faster, since rsync only transfers the files that have changed since the last run. We're talking about the difference between "the backup takes days" and "the backup takes only (tens of) minutes".
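
A minimal sketch of the kind of invocation meant here, assuming the data lives under /home and the new server is reachable as newserver (adjust the paths, user and host to your setup):

```
# -a preserves permissions, ownership, timestamps and symlinks (archive mode)
# -v is verbose; -P shows progress and keeps partially-transferred files
# The trailing slash on the source copies the *contents* of /home into /home on the target
rsync -avP -e ssh /home/ root@newserver:/home/
```

Re-running the exact same command after the first pass is what buys you the speed-up: rsync compares source and destination and only re-sends what has changed.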

By the way, if the data is important, why RAID 0, which is very, very dangerous? If one disk dies, your data is gone. And what is a 15000K SSD? An SSD is not a rotational disk.

Janne Pikkarainen

Everything Janne says in his answer is a great idea, but there are a few things worth checking first, starting with whether the network path between the two servers is actually healthy.

What does your `netstat -in` output say? Are there TX or RX errors on the two NICs involved? (One or two are OK, but literally anything over about 10 is cause for concern.) If so, there may be duplex issues, and you will need to work with your network admin to sort those out.
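
A rough sketch of the checks meant here; `eth0` stands in for whichever interface carries the transfer, and `ethtool` may or may not be installed on a CentOS 5.8 box:

```
# Per-interface error counters (run on both machines)
netstat -in
# Look at the RX-ERR/RX-DRP and TX-ERR/TX-DRP columns for the NICs in use;
# error counts that keep climbing usually point at duplex or cabling problems.

# If ethtool is available, confirm the negotiated speed and duplex
ethtool eth0
```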

Can you test simple throughput with (e.g.) `nc`? If you open up a listener on one server with `nc -l 12345 > /dev/null`, then throw ten gig of data at it from the other with `dd if=/dev/zero bs=1000k count=10000 | time nc a.b.c.d 12345` (where a.b.c.d is the IP address of the listening server), how long does that take? For a 100Mb network that should take on the order of 850s (about 15 minutes); for 1Gb or 10Gb networks, divide accordingly. If it's much slower than that, again, you must suspect an issue at a lower level than the protocol.
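
Written out as the two commands to run (same placeholders as above; a.b.c.d is the listener's address, and note that some netcat builds want `nc -l -p 12345` rather than `nc -l 12345`):

```
# On the destination server: listen on TCP port 12345 and throw the data away
nc -l 12345 > /dev/null

# On the source server: push roughly 10 GB of zeroes at it and time the transfer
dd if=/dev/zero bs=1000k count=10000 | time nc a.b.c.d 12345
```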

If there are fundamental networking issues on the path, no protocol on earth will make things better.

Edit: I've added to this in the light of voretaq7's comment above, that the machines might be at different sites. If that's so, then you might have no problem at all other than limited bandwidth between the sites.

An example: I have two servers, at sites connected to the internet by 4Mb/s connections, where the powers that be wanted to copy a 1.5TB production database from one site to the other. I pointed out that, even if they got perfect connectivity between the two sites, and neither internet connection was used for anything other than this copy (both ropey assumptions), it would take about 35 days to copy the data. They agreed to buy me a fast desktop USB RAID box, and I moved the data by London Underground.
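
As a back-of-the-envelope check on that figure (decimal terabytes and megabits assumed):

```
# 1.5 TB in bits, divided by 4 Mb/s, converted to days
echo "1.5 * 1000^4 * 8 / (4 * 1000^2) / 86400" | bc -l    # ~34.7 days
```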

If this is the issue, there is no fix except to pay for more bandwidth (or do it via sneakernet/tubenet, as I did). If the two machines are directly connected via a LAN, this caveat does not apply.

MadHatter