
I am trying to transfer a 6.5 GB file between two Windows Server 2003 machines on our network. Using `robocopy \\sourceserver1 \\destinationserver2 filename /z` is making the transfer crawl: three hours after initiating the copy, the file has only progressed to 26%. I am aware that the `/z` switch slows Robocopy down, but the lag here seems excessive.

I would like to diagnose what is going on in the network and identify possible bottlenecks. Can somebody suggest where I should start, or whether I should have transferred the file some other way?

Thanks

sc_ray
  • Old question, but it seems that `/Z` really kills your speed due to progress header updates. After each chunk is written it re-writes the progress header on the file. Restarts are great in some circumstances but they are also very slow. – Corey Mar 22 '19 at 05:24
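
A quick way to test that theory is to re-run the copy without `/z`. The sketch below assumes placeholder share and file names, not the asker's actual paths, and caps the retry count so a failure does not stall on Robocopy's huge default:

```
rem Straight copy without restartable mode; /r and /w limit retries
rem so a failed attempt does not hang for the default 1,000,000 tries.
robocopy \\sourceserver1\share \\destinationserver2\share bigfile.bak /r:2 /w:5
```

The trade-off is that without `/z` an interrupted copy starts over from the beginning, so you are swapping restartability for raw speed.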

1 Answer


Unless you have something like NetFlow, you might struggle to see what's taking up your bandwidth.
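
If NetFlow isn't available, one rough substitute is to watch the NIC counters on each server while the copy runs. A minimal sketch using the built-in typeperf tool (run it on both the source and destination machines):

```
rem Sample total bytes/sec on every network interface,
rem once a second for 60 samples, on the local machine.
typeperf "\Network Interface(*)\Bytes Total/sec" -si 1 -sc 60
```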

You can use the /ipg:n switch with your robocopy command to try to throttle the bandwidth back a bit; I have used that before over slow links.
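
Note that /ipg:n inserts a gap of n milliseconds between packets, so it frees up the link for other traffic rather than making this particular copy faster. A sketch with placeholder paths and an arbitrary delay:

```
rem Throttled copy: wait 50 ms between packets to leave headroom
rem for other traffic on the link.
robocopy \\sourceserver1\share \\destinationserver2\share bigfile.bak /ipg:50 /r:2 /w:5
```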

You could try using FTP to move the file instead: install an FTP server on the machine you want to copy the file to, and FTP to it from your source server. Using FileZilla or similar you should be able to run it in restartable mode, and also see how much bandwidth it is using.
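
If you would rather script it than use FileZilla, the built-in Windows ftp client can read commands from a file. A minimal sketch, assuming an FTP server is already listening on destinationserver2 and that the account name, password and local path are placeholders:

```
upload.txt (ftp command script):
    open destinationserver2
    user ftpuser ftppassword
    binary
    put D:\staging\bigfile.bak
    bye

Run the transfer non-interactively from the source server:
    ftp -n -s:upload.txt
```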

I have sometimes struggled copying large files in the past, so I know how you feel!

beakersoft
  • Thanks for the NetFlow tip. Are there any working examples of using NetFlow to track down bottlenecks in the network infrastructure? The /ipg:n switch might also be helpful. Thanks – sc_ray Jan 18 '11 at 17:50
  • NetFlow is quite an in-depth application, but it will show you which clients/applications/protocols are taking up your bandwidth – beakersoft Jan 18 '11 at 20:03