
We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?


We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined.
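For reference, the sync job boils down to something like this (paths, share names, and log locations here are illustrative, not our real ones):

    rem Mirror the content partition to WEB2; /MIR also deletes files
    rem that no longer exist on WEB1
    robocopy D:\WebContent \\WEB2\D$\WebContent /MIR /R:2 /W:5 /NP /LOG:C:\Logs\content-sync.log

    rem Sync the full IIS 7 configuration to WEB2 with Web Deploy
    rem (running with -whatif first previews the changes)
    msdeploy -verb:sync -source:webServer -dest:webServer,computerName=WEB2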

We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.


We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this.

We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.
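If we do eventually move to SQL Server session state, my understanding is that it's mostly a per-site web.config change - something like the fragment below (the connection string is just an example) - plus creating the session database up front with aspnet_regsql.exe (-ssadd):

    <!-- Illustrative only: move session state out of process so either
         server can handle any request -->
    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=SQL1;Integrated Security=SSPI"
                    timeout="20" />
    </system.web>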

But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

  • We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.

  • We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. (See the sketch after this list.)

  • Sites will no longer be able to access their content if there's ever an Active Directory problem.

  • In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers.
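As a sketch of what the first two bullets might involve (server, share, and account names are invented), I believe appcmd can both re-point a site's physical path and set "connect as" credentials on the virtual directory, which would avoid changing the app pool identity:

    rem Re-point the site root at the UNC share and set the account IIS
    rem should use when connecting to it (all names hypothetical)
    appcmd set vdir "ExampleSite/" /physicalPath:\\storage\webcontent\ExampleSite /userName:DOMAIN\svc-webfarm /password:*****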


On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content.

This thread is pretty interesting, though some of these people are working at a much larger scale.

I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share.
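From what I've read, the shared configuration feature records the redirect in %windir%\system32\inetsrv\config\redirection.config - roughly like this, with example share and account names (IIS stores the password encrypted, and it's normally enabled through the Shared Configuration page in IIS Manager rather than by hand):

    <configuration>
      <configurationRedirection enabled="true"
                                path="\\storage\iisconfig"
                                userName="DOMAIN\svc-iisconfig"
                                password="..." />
    </configuration>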

What do you think?

Richard Beier

4 Answers


Your concerns are valid; at the end of the day you'll have to weigh each reward against its inherent risks.

Shared content is great, but as you pointed out, you then have a dependency on a remote host, and clustered storage technologies aren't cheap or simple. This type of setup has its place, and given your current solution I assume you aren't looking for 99.999% uptime.

Have you thought about extending your script to disable load-balanced nodes (at the firewall) while you sync content from WEB1 to WEB2?
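If the firewall can't be driven from a script, something as blunt as stopping W3SVC on the node being refreshed should achieve the same thing, since the balancer's health check will fail. A rough sketch, with made-up paths:

    rem Take WEB2 out of rotation, sync, then bring it back
    sc \\WEB2 stop W3SVC
    robocopy D:\WebContent \\WEB2\D$\WebContent /MIR /R:2 /W:5
    sc \\WEB2 start W3SVC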

Shared configuration is great, though: it falls back to a locally cached copy if your UNC share is unavailable, and it's a good way of ensuring your web apps have the correct configuration.

My 2c.

commandbreak
  • Thanks commandbreak. You're right - we're a small company and we aren't looking for five nines, just higher reliability than we currently have. Unfortunately there's no way to control our firewall programmatically, but we could take the web server down on WEB2 during the sync and restart it after. The firewall should detect that it's not accepting connections and direct all traffic to WEB1 while it's down. Good to know that shared config uses a local cache - that's one less thing to worry about... – Richard Beier Mar 24 '10 at 18:19
  • As @penra shared in another answer, if SMBv1/v2 is used, connection exhaustion can be a critical issue for shared content on a remote host compared to local content. That's what this answer missed. – Lex Li Jul 07 '18 at 14:17

Shared disk on a SAN is the best solution - though it's only a little more realistic than flying on a broom. Some vendors, like Melio, offer it, but there are big downsides: it's expensive, you need a SAN, and on VMware you lose the ability to snapshot when using a shared controller.

Duane

We have been using UNC shares to serve clustered (NLB) webheads for some time, and we recently moved to the 64-bit Windows Server 2008 R2 - the difference is huge: no more concerns about exhausting SMB connections. There are limits, but they are high. I've just added DFS into the mix with another file server, but I can't speak to its reliability yet.
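For anyone stuck on an older OS, the limits in question were typically raised through registry values like these (an illustrative .reg sketch - test before relying on it):

    Windows Registry Editor Version 5.00

    ; Client-side cap on concurrent SMB commands
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
    "MaxCmds"=dword:00000800

    ; Server-side cap on concurrent requests per client
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
    "MaxMpxCt"=dword:00000800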

penra
  • Note to future readers: later versions of Windows support SMBv3, which can further improve performance. – Lex Li Jul 07 '18 at 14:15

Use iSCSI with a shared-disk file system... UNC shares are too slow.