
Our Systems team is implementing Double-Take on client machines for live replication to DR servers. Developers' desktops typically have multiple instances of Visual Studio open at all times, which generate about 4 GB of write IO per hour. Even when Visual Studio is idle, it appears to generate large spikes of IO every few seconds. Double-Take can only replicate about 800 MB in that same period, so it is constantly playing catch-up and pegs one of the client's CPU cores at 100%. The machine grinds to a near standstill, and Visual Studio becomes unresponsive as it appears to be competing with Double-Take for IO on the same files.
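
For reference, here is a minimal sketch (Python, assuming the psutil package is available and that Visual Studio runs as devenv.exe; adjust the process name if yours differs) of one way to measure the per-process write rate:

    # Minimal sketch: sample cumulative write bytes for devenv.exe over an
    # interval. Assumes the psutil package is installed; "devenv.exe" is the
    # usual Visual Studio process name but may differ on your install.
    import time
    import psutil

    def total_write_bytes(name="devenv.exe"):
        total = 0
        for proc in psutil.process_iter(["name"]):
            try:
                if (proc.info["name"] or "").lower() == name:
                    total += proc.io_counters().write_bytes
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        return total

    interval = 60  # seconds to sample over
    before = total_write_bytes()
    time.sleep(interval)
    after = total_write_bytes()
    print(f"~{(after - before) / interval * 3600 / 1024**3:.2f} GB written per hour")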

Is there a way to configure Double-Take so that it plays nicely on a developer's machine? Does anyone have experience or advice on running Double-Take on client machines in general? Is there a different solution that might work better for DR? It does not necessarily need to be live replication.

HopelessN00b
Tion
  • Double-Take does an asynchronous transfer as it is. It sounds like the async buffer is getting overrun by all the I/O ops Visual Studio is doing and is bursting the entire buffer frequently. Considering the load here, you may want to look at something that does synchronous writes. Your developers will complain, since it'll slow down overall performance, but at least it won't bring things to a halt the way it's happening now. – sysadmin1138 Aug 03 '10 at 17:04
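
To put rough numbers on the buffer-overrun point above (the 4 GB/hour and 800 MB/hour figures come from the question; the queue size below is only an assumed example):

    # Back-of-envelope: how fast the async replication queue falls behind when
    # write IO outruns replication throughput. Rates are from the question;
    # the queue size is an assumed example, not a Double-Take default.
    write_rate = 4.0        # GB/hour generated by Visual Studio
    drain_rate = 0.8        # GB/hour Double-Take actually replicates
    backlog_growth = write_rate - drain_rate
    print(f"Backlog grows by ~{backlog_growth:.1f} GB every hour")

    queue_gb = 1.0          # assumed size of the async buffer
    minutes_to_fill = queue_gb / backlog_growth * 60
    print(f"An assumed {queue_gb:.0f} GB queue would fill in ~{minutes_to_fill:.0f} minutes")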

3 Answers


AFAIK, DT doesn't have anything that would really apply to your scenario. You can use the DT Connection Manager to limit the transmission window (time- and/or queue-threshold based), or you can cap how much bandwidth DT is allowed to use. You can also have DT compress the replicated data. None of these options look like they'd work for you.
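
As a rough sense of why throttling in particular doesn't help, here is a quick calculation of the sustained link speed needed just to keep pace (the 4 GB/hour figure is from the question; the 2:1 compression ratio is an assumption, not a measured value):

    # Quick check of the bandwidth-limit option: the sustained throughput needed
    # just to keep pace with the write rate. 4 GB/hour comes from the question;
    # the 2:1 compression ratio is an assumption.
    write_gb_per_hour = 4.0
    compression_ratio = 2.0                       # assumed
    bits_per_second = write_gb_per_hour / compression_ratio * 1024**3 * 8 / 3600
    print(f"Needs a sustained ~{bits_per_second / 1e6:.1f} Mbit/s just to keep up")
    # Capping Double-Take below that only makes the replication queue grow faster.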

On a side note, why DT? That's an expensive solution for a development machine.

joeqwerty

The question you should be asking here (IMO) is why you're putting time and effort into covering client workstations in a DR scenario. What's on those client machines that's critical, and why isn't it stored on the network?

There are likely to be more issues lurking under the surface here. Client workstations are not designed for high availability; there's no redundancy across most of their components.

I suspect that even if you get Double-Take working for the client replication, you're still putting a sticking plaster over a much deeper problem.

Chris Thorpe
  • All the code that is critical is obviously in source control and is stored on the network with HA, real-time DR, scheduled backups, etc. – Tion Aug 05 '10 at 13:22

I'm late to the party, but Visual Studio makes a whole lot of noise in TEMP space on the developer workstation. If you could exclude common temporary file areas from your replication, it might save you.
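
As a starting point, a small sketch (Python, illustrative only; the exclusions themselves would be configured in Double-Take, and these paths are assumptions about a typical Visual Studio workstation) that lists candidate temp areas to consider excluding:

    # Illustrative only: candidate directories where Visual Studio tends to
    # churn temporary/intermediate files. The actual exclusions are configured
    # in Double-Take; these paths are assumptions about a typical workstation.
    import os

    candidate_exclusions = [
        os.path.expandvars(r"%TEMP%"),                                  # user temp space
        os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\VisualStudio"),   # VS caches/settings
    ]
    # Per-solution build output (bin\ and obj\ folders) is another heavy writer
    # worth considering, if the replication scope includes source trees.
    for path in candidate_exclusions:
        print(path, "-", "exists" if os.path.isdir(path) else "not found")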

Larry Silverman