
Currently I'm running a number of ASP.NET applications on a single dedicated Windows Server 2012 R2 Standard server with a hardware firewall in front. If this server goes down, my applications cannot be used, which is a big risk, so I would like to improve reliability by removing any single point of failure.

My webhost has suggested 2 possible options:

Option 1: Two Windows Server 2012 R2 Standard servers, two firewalls and two load balancers, configured for failover (active-passive), using DFS for IIS and file replication. Each server has two SSDs in RAID 1.

Option 2: A virtual Windows Server 2012 R2 Standard server and a virtual router/firewall in a private cloud they host, running Apache CloudStack with a NetApp storage platform in a RAID 60 configuration.

Two questions:

  1. Is option 1 even possible (and reliable) with DFS installed only on those 2 servers, or do I need additional servers for controlling DFS?
  2. Which option would you choose if you keep reliability and performance in mind? Costs are similar, so that's no concern.

1 Answer


Option 1 is possible with additional servers, but I wouldn't use DFS for it. More likely you would use a DAS array / SAN, or some form of replication, to make your application files accessible from both machines. You would then load balance the front end, with a load balancer that is itself protected from failure by redundant components or by running two in an active/active or active/passive configuration.
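
To make the front-end piece a bit more concrete: whatever load-balancer pair you end up with, node health is decided by a periodic probe against each IIS box. Below is a minimal sketch of such a probe in Python; the hostnames and the /health path are placeholders of my own, not anything your host has proposed.

    import urllib.request

    # Hypothetical IIS front-end nodes behind the load-balancer pair;
    # swap in your real hostnames and a lightweight health endpoint.
    NODES = ["http://web01.example.local/health",
             "http://web02.example.local/health"]
    TIMEOUT_SECONDS = 3

    def probe(url):
        """Return True if the node answers HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                return resp.status == 200
        except OSError:
            # Covers connection refused, DNS failure, timeouts and HTTP errors.
            return False

    if __name__ == "__main__":
        for url in NODES:
            print(url, "UP" if probe(url) else "DOWN - take out of rotation")

A real load balancer runs exactly this kind of check on an interval and pulls a node out of rotation once it fails a few probes in a row; that is what removes the web servers themselves as a single point of failure.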

I would, however, choose option 2. It is the most common way to protect a non-cluster-aware application from server component failure. Performance with modern hypervisors is extremely close to running on bare metal, and a solid storage backend will have no trouble providing the IOPS and redundancy for all manner of workloads.

If your ASP.NET applications are database-heavy, be sure to ask questions about storage IO performance before committing. The performance of that RAID 60 will depend entirely on what disks, controllers and storage fabric are in use.
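
If you want a number to compare against, baseline your current SSDs first (a purpose-built tool such as Microsoft's DiskSpd is the right way to do this; the Python sketch below is only a rough stand-in) and then ask the host to show similar or better figures from the RAID 60 tier. The file path, block size and duration here are placeholder assumptions, and the Windows file cache will flatter the result unless the test file is much larger than RAM.

    import os, random, time

    # Placeholder path on the volume you want to measure.
    TEST_FILE = r"D:\iops_test.bin"
    FILE_SIZE = 1 * 1024**3      # 1 GiB test file
    BLOCK_SIZE = 8 * 1024        # 8 KiB, the size of a SQL Server page
    DURATION = 10                # seconds to run the read loop

    # Create the test file once, 1 MiB at a time.
    if not os.path.exists(TEST_FILE):
        with open(TEST_FILE, "wb") as f:
            for _ in range(FILE_SIZE // 1024**2):
                f.write(os.urandom(1024**2))

    ops = 0
    deadline = time.time() + DURATION
    with open(TEST_FILE, "rb", buffering=0) as f:
        while time.time() < deadline:
            # Random block-aligned seek followed by a single small read.
            f.seek(random.randrange(FILE_SIZE // BLOCK_SIZE) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
            ops += 1

    print("~%d random %d KiB reads/sec" % (ops / DURATION, BLOCK_SIZE // 1024))

Comparing that kind of figure between your current disks and a test VM on their CloudStack platform will tell you far more than the RAID level on its own.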

tomstephens89
  • Regarding option 1, indeed the idea is to use 2 load balancers to prevent SPOF. What's the reason you wouldn't use DFS replication in this scenario? – Martin de Ruiter Aug 13 '15 at 22:38
  • Also, my applications are indeed database-heavy; what kind of storage IO performance numbers should I be looking for? My current 480 GB SSD disks perform well for me. – Martin de Ruiter Aug 13 '15 at 22:39
  • @MartindeRuiter DFS in my experience is used for organising a bunch of distributed SMB shares into a single namespace. Think multiple locations, multiple data centers, one unified tree structure. It is client/server file-share technology, and I personally have never seen it used for application availability requirements, only for large file shares where servers are distributed across the country. If you are using SSDs now then excellent. What I meant was that you should ask your host for the specifications of their RAID 60 backend. It might be slow 7.2k disks chosen for capacity... or something faster. – tomstephens89 Aug 14 '15 at 06:26