I have a primary and a backup Windows Server 2008 machine, and a bunch of Windows XP clients that map a drive to a share on the primary server. If the primary server goes down, I want those client machines to automatically re-map their drive to the backup server, so they can continue to access the files.

Should I write a VBScript or Python script that detects whether the primary server is down and issues the appropriate "net use m: \\server\share ..."? I need that script to run every minute, no matter who is logged in. Can I do that with Windows Scheduled Tasks?
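To make this concrete, here's a rough Python sketch of the kind of thing I mean (the hostnames and share name are placeholders):

    # failover_map.py -- re-map M: to whichever server is reachable.
    # Hostnames and the share name below are placeholders for this sketch.
    import subprocess

    PRIMARY = "primary"
    BACKUP = "backup"
    SHARE = "share"

    def server_up(host):
        # One ping; exit code 0 means the host answered.
        return subprocess.call(["ping", "-n", "1", host],
                               stdout=subprocess.DEVNULL) == 0

    def current_target():
        # Parse "net use m:" output for the UNC path M: points at, if any.
        out = subprocess.run(["net", "use", "m:"],
                             capture_output=True, text=True).stdout
        for word in out.split():
            if word.startswith("\\\\"):
                return word.lower()
        return None

    desired = r"\\%s\%s" % (PRIMARY if server_up(PRIMARY) else BACKUP, SHARE)
    if current_target() != desired.lower():
        subprocess.call(["net", "use", "m:", "/delete", "/y"])
        subprocess.call(["net", "use", "m:", desired, "/persistent:no"])

One thing I'm unsure of: drive mappings are per logon session, so a task scheduled as SYSTEM presumably wouldn't change the logged-in user's M: drive. It would have to run in each user's own context (e.g. schtasks /create /sc minute /mo 1 under the user's account).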

I'm a Unix guy, and could use any tips you have to offer on accomplishing this. Is there a better solution?

many thanks, -Ian

5 Answers

Try DFS. Here's an article I found that goes into detail.

Update 7-12-2016: Since the above URL is broken, here's Microsoft's own page on DFS: https://msdn.microsoft.com/fr-fr/library/cc782417(v=ws.10).aspx

Mark Allen
  • Thanks, but for DFS, if the DFS server goes down, the rest are inaccessible. I only have two servers to work with, so I can't use that option, unfortunately. –  Dec 08 '09 at 22:10
  • 3
    no DFS has redundancy & folders can be replicated. It matches your use case – Nick Kavadias Dec 08 '09 at 22:27
  • 2
    Mark and Nick are correct; you want to have multiple namespace servers to handle the requests for the DFS share, and then multiple folder targets with the same data replicated using DFS-R, for redundancy. If you want an Active/Passive setup, set the second folder target to be "last among all targets" within DFS management. – Jeff Miles Dec 08 '09 at 22:32
  • Thanks for the information. It seems that to support DFS failover, I need admin access to the Active Directory server. In my case my university does have an Active Directory server, but I don't think they'll let my servers join their domain or anything like that. It looks like I would also need to assign an IP address for the cluster, which I don't think is possible because the servers are on different IP subnets, and IP packets destined for one segment cannot be re-routed for the other subnet without admin access to the routers, which I do not have. Please let me know if I'm incorrect! –  Dec 08 '09 at 23:12
  • See my revised answer for your options given the additional details. DFS and clustering seem to be out of the realm of possibility. – Jim B Dec 09 '09 at 04:16
  • The URL is invalid now. Try this instead (without pictures though): http://web.archive.org/web/20100122013226/http://help.globalscape.com/help/availl/Using_Microsoft_DFS_for_Failover.htm – wandersick Jul 08 '16 at 04:50

DFS can most certainly give you a degree of high availability, as well as other features. However, if all you want to do is set up a redundant file server cluster, see this TechNet step-by-step article.

You cannot do domain DFS without access to the domain. You cannot set up a cluster without access to AD. That leaves you with two other options:

  1. Write a script that replicates the files at a given interval on the server (or, if you think you are up to it, on folder change), and write a script that users can click on if they have problems (a sketch follows this list).

  2. Mark the shared folders as available for offline access; the XP systems (once they make the share available offline) will cache the files locally and catch up to the server should it become unavailable. Once marked available offline, the cached copy also means that if the primary server actually dies and cannot be brought back up, you still have a backup copy.
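For option 1, a minimal sketch of the replication half, written in Python around robocopy (the paths are hypothetical; robocopy treats exit codes below 8 as success):

    # replicate.py -- run on the primary server from Task Scheduler at a
    # given interval to push the share's contents to the backup server.
    # Both paths below are hypothetical.
    import subprocess
    import sys

    SOURCE = r"D:\Share"
    DEST = r"\\backup\share"

    # /MIR mirrors the whole tree (including deletions); /R and /W keep
    # retries on locked files short so a run cannot hang for long.
    rc = subprocess.call(["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5"])

    # Robocopy exit codes 0-7 mean success; 8 and above mean failures.
    sys.exit(0 if rc < 8 else rc)

The "click on it" script for users can be as small as a one-line batch file running net use m: \\backup\share /persistent:no, or the remap sketch from the question.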

Jim B

The "automatically remap" functionality is going to be the thorn in your side. You won't ever get that to work.

Either investigate third-party failover software, or cobble together your own solution server-side (see below). Doing it client-side is just asking for difficulty, though.

You might consider a script or manual procedure to add an alias name to the standby server computer that allows it to answer for the failed server. You'll have to restart the "Server" service on the standby computer (and update DNS / WINS, as necessary) to get it to start answering for that name. Clients will have the name to IP mapping cached locally, too, so you may want to consider assigning the failed server computer's IP address to the standby server as part of that procedure. (Even then, clients will have the MAC to IP mapping cached in their ARP caches, so unless you also assign the failed server's MAC address to the standby server's NIC you're not going to get instantaneous failover.)
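The alias step described above is usually done with the LanmanServer registry values OptionalNames and DisableStrictNameChecking (both documented for Windows file servers). A hedged Python sketch, run elevated on the standby server, with the alias name as a placeholder:

    # add_alias.py -- make the standby server answer SMB requests for the
    # failed server's name. "PRIMARY" is a placeholder for that name.
    # Run as administrator; restarting the Server service drops open sessions.
    import subprocess
    import winreg

    PARAMS = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0,
                        winreg.KEY_SET_VALUE) as key:
        # Extra name(s) the Server service should answer for.
        winreg.SetValueEx(key, "OptionalNames", 0,
                          winreg.REG_MULTI_SZ, ["PRIMARY"])
        # Accept connections addressed to a name other than the real hostname.
        winreg.SetValueEx(key, "DisableStrictNameChecking", 0,
                          winreg.REG_DWORD, 1)

    # Restart the Server service so it starts answering for the alias.
    subprocess.call(["net", "stop", "server", "/y"])
    subprocess.call(["net", "start", "server"])

You would still need to update DNS/WINS as noted above; the registry change only makes the standby box willing to answer for the name.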

Evan Anderson
  • Evan, thanks for the ideas. Unfortunately my university doesn't support dns/wins failover, and IP failover isn't an option either because we have a routed network and the two machines are on different IP subnets. So that knocks out all of the third-party failover solutions I know about... that's why this is such a challenge! –  Dec 08 '09 at 22:12

If you can keep the contents of both file servers in sync, you might consider using a CNAME and just pointing it at the server that's available.

A bit of a long shot, but you could also try assigning the DNS record to both servers, with a lower priority on the failover server.

All of the above is DNS-based.
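If (and this is an assumption) your DNS server accepts RFC 2136 dynamic updates, repointing the CNAME can be scripted, for example with the dnspython library. The zone, record names and server address below are all placeholders:

    # repoint_cname.py -- switch the file-server CNAME to the backup machine.
    # Requires the dnspython package and a DNS server that accepts dynamic
    # updates from this host. All names and addresses are placeholders.
    import dns.query
    import dns.update

    update = dns.update.Update("example.edu")
    # Replace whatever "fileserver" currently points at; the low TTL (300 s)
    # keeps the window during which clients cache the old target short.
    update.replace("fileserver", 300, "CNAME", "backup01.example.edu.")

    response = dns.query.tcp(update, "192.0.2.53")  # the zone's primary DNS
    print(response.rcode())  # 0 (NOERROR) means the update was accepted

Note the caveat in the comment below, though: A and CNAME records have no priority field, so the "lower priority" variant won't work.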

HTH.

Home Boy
  • There is no priority value for A or CNAME DNS records. Even if there were, you have no way to force clients to use the primary server when both are online. It would only work in a perfect world. – Martijn Heemels Nov 10 '11 at 13:36

This may be a bit simplistic, but how about this:

If you have good communication with your users, enough to explain what's going on and how a failover will be handled, you could do a manual failover.

On the backup server, create the backup folder and name it something intuitive like "Backup Files". Map a drive on the users' PCs to that backup folder. Run a script on the backup server (I like to use robocopy for this) to fetch the files from the primary. Make it clear to the users that the backup files are overwritten, so if they modify them they will lose their changes.
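A minimal sketch of such a fetch script, again using robocopy and assuming hypothetical paths; the marker file gives you a way to stop the fetch when you declare a failover:

    # fetch_loop.py -- run on the backup server; mirrors the primary's share
    # into the local backup folder every ten minutes until a marker file
    # appears. Paths, share names and the interval are all placeholders.
    import os
    import subprocess
    import time

    SOURCE = r"\\primary\share"
    DEST = r"D:\BackupFiles"
    STOP_MARKER = r"D:\FAILOVER_DECLARED"  # create this file to stop fetching

    while not os.path.exists(STOP_MARKER):
        # /MIR makes DEST an exact mirror of SOURCE -- which is exactly why
        # any user edits made directly in DEST are lost on the next pass.
        subprocess.call(["robocopy", SOURCE, DEST, "/MIR", "/R:1", "/W:5"])
        time.sleep(600)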

When something goes wrong, declare a failover, stop the fetch script on the backup server, and get the users to use their drive mapping to the backup folder.

When all is well again, copy the changes back to the primary (again, robocopy does this quite well) and declare a fallback, with the users going back to the primary.

Not automatic, so a bit rubbish, but it should work manually with minimum effort. I can't think of anything automatic that hasn't already been suggested.