
I have two identical servers running Windows 2008 R2. I am using DFS Replication to keep a few shares in sync between the two servers. Each server is a Domain Controller, and each can take over if the other becomes unavailable, including DHCP, DNS, and a number of other services we rely on.

We also have a remote workstation that creates a series of output files and places them on a share via the DFS namespace. It's essentially for backup and reporting purposes.

I have a batch job I can schedule to run each night that examines these files, does a number of things with them, and then, as one of its final tasks, deletes backup files older than 30 days (if not processed). It's a single script.

I would like this task to run on only one of the servers, but to run on the other server if the "preferred" server is unavailable. Is this possible? Can two or more Windows 2008 servers handle this situation?

I could run the job on both servers each night, since it's re-entrant-safe, but if I did so I would probably cause DFS replication headaches; the share in question is in a full mesh configuration.

If it's not possible, are there any clever tricks to make this work, or any third-party tools?

One idea is to simply run the job from the workstation where the data files are created and copied to the servers. The only problem with doing this is that it's a user's workstation, and they sometimes forget and power off the machine at night, preventing the daily reports from being processed.

– Kilo

1 Answer


Yes.

Create the job to reference the relevant DFS namespace instead of a specific server. The files will be deleted off of whichever server is the active DFS target from the machine the job runs on (if you run it on one of the servers, that server should obviously target itself), and the deletion will replicate to the other server via DFS.

So instead of deleting, say, C:\somefolder\somefile.foo, you'd delete \\dfsnamespaceforsomefolder\somefile.foo.
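
For the 30-day cleanup itself, something along these lines would work against the namespace path. This is a rough sketch using forfiles, which ships with Windows 2008 R2; the namespace path is the placeholder from above, the "if not processed" check is your own logic and is omitted here, and pushd is used because forfiles can be fussy about UNC paths:

    rem Map the DFS namespace path to a temporary drive letter (pushd does
    rem this automatically for UNC paths), delete everything older than 30
    rem days, then drop the mapping.
    pushd \\dfsnamespaceforsomefolder
    forfiles /S /D -30 /C "cmd /c del @path"
    popd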

Of course, if you run this on one of the DFS servers and that server becomes unavailable, the task won't be able to kick off on the other DFS server, on account of the computer that runs the task being down. A hacky workaround is to put a wrapper script on the other server that checks whether the preferred server is available, scheduled to run shortly after the first task: if the preferred server is available, exit; if it's unavailable, run the script.
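
As a sketch of that wrapper (the server name SERVER1 and the script path are hypothetical placeholders for your environment):

    @echo off
    rem Failover wrapper: run the nightly job here only if the preferred
    rem server does not answer a single ping.
    ping -n 1 SERVER1 >nul 2>&1
    if not errorlevel 1 (
        rem Preferred server is up; it will run the job itself, so exit.
        exit /b 0
    )
    rem Preferred server is unreachable; run the job from this server instead.
    call C:\scripts\nightly-job.bat

Schedule it a few minutes after the preferred server's run time so the two never execute concurrently.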

– HopelessN00b
  • Hopeless: My thoughts exactly. I reckoned there had to be a cleaner solution, though. One of the problems I foresee is fencing the correct server so the scheduled task completes on only one of them (assuming I launch it on both servers at the same time). My other thought was to just stagger the jobs in time so that DFS replication isn't trying to resolve concurrent changes to the replicated space. – Kilo Aug 10 '12 at 00:51
  • @Kilo You don't have a third server you can run it from? That would make life super simple. You could just target the namespace, and the script would (should) select any available replica if the default target is unavailable. – HopelessN00b Aug 10 '12 at 00:59