
I am trying to copy a shared folder whose contents (folders and files) are created by a third-party app at unspecified intervals. I need to copy these files onto two different server shares. The way I have set it up is to run two different DOS batch files at server startup. Today both of them stopped working. I wanted to know if there is a more efficient way to copy these files WITH the monitor option to two different shared folders.

I have the following options set: /mon:1 /mot:15 /r:2000 /z /LOG+:\\MyPC\share\BACKUPLOG.txt /TEE

In the batch file I had to put the following line at the beginning to make it work; otherwise it just displayed the command in a loop and never actually ran: cd %windir%\system32\
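Here is roughly what each of the two batch files looks like (one per destination share; the server and share names below are placeholders, not the real paths):

    @echo off
    REM Without this cd line the command just looped and never ran
    cd %windir%\system32\
    REM /e (include subfolders) is assumed here; the rest are the options listed above
    robocopy \\SourceServer\Data \\DestServer1\Data /e /mon:1 /mot:15 /r:2000 /z /LOG+:\\MyPC\share\BACKUPLOG.txt /TEE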

Rick
  • Why not use a Distributed File System (DFS) arrangement? That way copying of data from one folder to the others would happen automatically. See https://en.wikipedia.org/wiki/Distributed_File_System_(Microsoft)#DFS_replication for more details. – George Erhard Feb 28 '13 at 16:56
  • We haven't set up DFS because we don't have much control over the server that is creating the files, and it is on another domain (cloud). We are using Robocopy to copy from this server to one of our 2008 R2 VM servers and to a PC. – Rick Mar 02 '13 at 11:34

1 Answer


I would use George's suggestion and do DFS-R; it's far more efficient for this kind of thing. But if, like me, you insist on using Robocopy, you might want to just add a single pass to the Task Scheduler and run it at a regular interval (or at login, which is also available in the Task Scheduler). This way you aren't creating a console window that, if interrupted, will totally hose up your copy process. By adding it to the scheduler, you can have it simply copy over the new updates, and the machine/VM doesn't have to be logged in for it to run if you save the credentials to the task (in the advanced properties of the task).

The issue I have run into with the /MON option is that often the app writing the data doesn't finish in time before Robocopy gets greedy and begins to copy it; of course it then has to copy the file again once the writing is completed. Running a single pass and then exiting "mostly" solved the problem for me. The contention can still happen during the run, but at least you aren't creating a race condition.
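As a sketch, the single pass would just drop the monitor switches and lower the retry count so a file that's still locked doesn't stall the run (the paths and log location here are placeholders):

    robocopy \\SourceServer\Data \\DestServer\Data /e /z /r:2 /w:10 /LOG+:C:\Logs\robocopy.log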

Set the task to run, say, every 5 minutes, repeating for a duration of 24 hours, every day. Of course this adds the extra load on the source server of having to scan the folder during each pass to see what has changed. YMMV.
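Creating that task from the command line might look something like this (the task name, paths, and the account used to save credentials are all placeholders):

    schtasks /create /tn "RobocopyPass" ^
      /tr "robocopy \\SourceServer\Data \\DestServer\Data /e /z /r:2 /w:10 /LOG+:C:\Logs\robocopy.log" ^
      /sc minute /mo 5 /ru MYDOMAIN\backupsvc /rp *

The /ru and /rp switches store the credentials with the task so it runs whether or not anyone is logged in, and /sc minute /mo 5 repeats the pass every 5 minutes (/rp * prompts for the password instead of putting it on the command line).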

MikeAWood