
I'm working as an application developer for a company that I've been with for some time. The business would like to mirror their existing Windows server (server A) to a new server (server B), with the main aim of providing redundancy if either server becomes unavailable. A hardware load balancer will direct traffic to one server or the other; the intention is that only one server receives traffic at any given time. On each server, all resources required to run all the applications live on the one machine.

Data that will require two-way synchronisation across the two machines is:

  • Several MySQL databases.
  • Two SQL Server databases.
  • Source code for a .NET application and a classic ASP website.
  • Email server files.
  • Other applications, backup routines, etc.
  • Server settings (if possible).

The co-location hosting company is on board and has ideas for load balancing and replication, but I've been asked to manage this.

I have some general concerns about this, being:

  1. This is not the most structured company I've worked in, process-wise (little source control experience in the company, free access to the live server, whose configuration is under regular change, no documentation, etc.). This may require more technical discipline than is currently present.

  2. None of the applications are cluster-aware. Most DB operations are of a non-transactional nature.

My specific questions are:

  • Is a two-server failover configuration as I've described here commonplace? Any pros/cons?

  • What are the potential pitfalls of two-way data replication, where both database servers act as both publisher and subscriber? How can I audit for data concurrency risks?

  • Are there any tools for duplicating server software installs/settings? (Imaging is probably out of the question, as the two servers have different hardware and specs.) I imagine keeping OS settings, DB schemas, source code, the version control server, the email server, etc. in sync could be a big overhead.

  • Given my aforementioned concerns, should I be advising that we slow down on this until we can bring in better systems management and address any potential application weaknesses before going ahead, or do you think necessity will be the best driver of these changes once the second server is in play?
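To make the concurrency-audit question concrete, what I have in mind is something like periodically checksumming each table on both servers and flagging divergence before any failover. This is only a sketch with made-up row data; in practice the rows would be pulled from the two live databases:

```python
import hashlib

def table_checksum(rows):
    """Order-insensitive checksum of a table snapshot (rows are tuples)."""
    digest = hashlib.sha256()
    for row in sorted(rows):
        digest.update(repr(row).encode())
    return digest.hexdigest()

# Pretend snapshots of the same table taken from server A and server B.
server_a = [(1, "alice"), (2, "bob")]
server_b = [(1, "alice"), (2, "bobby")]   # row has drifted on server B

if table_checksum(server_a) != table_checksum(server_b):
    print("divergence detected -- reconcile before failing over")
```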

Sorry if this is a bit of a ramble. Any insight into how to manage this or comments along the lines of "you should leave this to someone who knows what they are doing" are welcome!

gb2d

1 Answer


This question is too big; you're unlikely to get specific answers to it.

In order to have "multi-master" replication, meaning both servers responding, you need to solve that problem for each protocol separately (SQL, SMB, HTTP, etc.). A much easier route is to use only one server at a time in an active-passive scenario, but you're still talking about a highly complex solution to ensure zero data loss across all those apps and protocols.
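As one concrete illustration of the per-protocol problem: even for MySQL alone, a master-master setup needs each server configured so auto-increment keys can't collide. A sketch of the relevant `my.cnf` settings (the server IDs and offsets here are assumed values, not your actual config):

```ini
# my.cnf on server A (server B would use server-id = 2 and
# auto_increment_offset = 2, so its keys interleave with A's)
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2   # both servers step auto-increment by 2
auto_increment_offset    = 1   # server A gets odd IDs, server B even
```

And that only covers MySQL; SQL Server, the file shares, and the mail store each need their own equivalent of this exercise.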

The easiest-but-not-likely-the-best solution is to run it as a virtual machine on a failover host cluster, but if you have an unplanned failover, I believe you'll lose what's in memory while the virtual machine reboots.

Windows File Services, IIS, and SQL Server all have their various levels of redundancy but each one is unique and would need to be evaluated based on your specific app needs.

If it were me, I would base this decision on uptime and the importance of the data. If you can stand the virtual-machine cluster active-passive scenario, then do that. I think trying to make it multi-master with all those different technologies on a single box is a nightmare. I know no one that does that. They/we have SQL servers doing mirroring, email doing its own thing on separate servers, etc. Every app you add is its own "solution" for high availability (to me).
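To see why multi-master without careful key management is a nightmare, here's a toy model (plain Python, no real database; purely illustrative) of two masters handing out auto-increment IDs independently between replication syncs:

```python
def next_ids(count, increment=1, offset=1):
    """IDs a server would assign, starting at `offset` and stepping by `increment`."""
    return [offset + i * increment for i in range(count)]

# Naive multi-master: both servers count from 1 in steps of 1.
a = next_ids(3)                # server A assigns [1, 2, 3]
b = next_ids(3)                # server B assigns [1, 2, 3]
print(set(a) & set(b))         # {1, 2, 3} -- identical primary keys on both sides

# MySQL-style auto_increment_increment/offset interleaves the ranges instead.
a2 = next_ids(3, increment=2, offset=1)   # server A: [1, 3, 5]
b2 = next_ids(3, increment=2, offset=2)   # server B: [2, 4, 6]
print(set(a2) & set(b2))       # set() -- disjoint, nothing to collide on merge
```

That fixes one failure mode for one database engine; every other app in the stack has its own version of the same problem.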

Bret Fisher
  • That's ok, it was this sort of answer I was looking for "I know no one that does that." says most of it. – gb2d Feb 10 '12 at 15:14