16

Going to cut right to the point on this question, as I'm after as diverse a range of solutions as possible and don't want to affect anyone's opinions with the question itself.

  • Client is a UK based company.
  • Organisation is 95% Windows with AD
  • They have an IT policy of keeping as little infrastructure "on premises" as possible, as such they have a 1Gbps line to a data centre which houses all server infrastructure.
  • UK branches that can't justify a high-speed link run a local server and Windows DFS for fast file access with synchronisation - this works fine.
  • The company has decided to open an office in Sydney, Australia.
  • Currently they have 20 people in this office, as well as one-man "presences" around the country.
  • They are having issues with both latency and bandwidth accessing the UK. Typical tests from their office usually yield no more than 4Mbps, with 320ms latency on a good day.
  • The high latency is preventing use of terminal services.
  • They need access to a lot of the same data as the UK staff.

We've had quite a few ideas already, but I'd like thoughts on how the users of ServerFault would solve this problem. Feel free to ask questions :)

SimonJGreen

4 Answers

28

Welcome welcome welcome to the world of the Internet in Australia.

Even in our largest population centre, we can struggle to get 3Mbps downstream on a business-class ADSL2+ connection. Cable penetration is poor in residential areas, and even worse in commercial ones, so unless you're fortunate you can't get cable internet. And because we're such a sparse population spread over such a massive area (4 million people in an area about the size of New York City?), wireless solutions are just as crappy, and expensive, because providers don't have 18 million potential customers.

I'm in the same situation as you (in case you can't tell), where we have users in a different capital city who have 200ms latency between our terminal servers and their office.

Solutions. Well, they're all mucky, I'm afraid:

  1. DFS. You mention you have DFS in your UK branches already. Can this be extended to your Australian office as well? Depending on the size of the folders, it may be a good idea to load up a 2TB drive with a copy of the DFS root, air-mail it to Sydney, get them to copy it onto their local server and then set up DFS to sync the changes between the two.

  2. Terminal Services. You're sort of screwed here, to be honest. High latency does not play well with real-time applications, and short of changing the laws of physics, if it takes 300ms for the data to get there and back, it will take at least 300ms to register the mouse click, plus about 5 seconds to render whatever context menu it opened. BUT, there are things you can do:

    • In terms of bandwidth, a terminal server session consumes about 30Kbps. This is less than a dial-up modem. Citrix consumes about 20Kbps and reportedly has better functionality for dealing with high latency.
    • Lower the colours to 16-bit
    • Disable drive and printer redirection
    • Are you having trouble with the server thinking that the clients no longer exist and terminating their sessions? You can increase the number of "failed" contact attempts it takes to drop a session via the TcpMaxDataRetransmissions value under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]; it's hex, so 0000000a is 10. The default is 5. (There's a scripted version of this tweak sketched after this list.)
    • Use QoS heavily on the remote networks. You don't need someone's YouTube streaming stopping people from actually getting work done.
    • Citrix has a product that doesn't actually lower the latency, but greatly lowers its noticeability. For example, it renders entered text client-side before sending it server-side, so it looks like the text has been entered whilst it's still only half-way to its destination. I forget what it's called.
    • Printing over terminal services sucks. Even in 2008 R2 with EasyPrint, the XPS files can become massive. Look into something like ThinPrint or Screwdrivers if you're going to be doing a lot of printing from the terminal server.
  3. Choice of internet provider. Australia has two major international link providers. One of them is owned by Telstra, who for many, many years had a total monopoly on the market. They used to be a government-owned company before they were privatised. They still own all the aging, shitty copper lines to the premises, all the exchanges, most of the equipment inside the exchanges and most of the communication between the exchanges. They also held the rest of the country by the balls when it came to international data exchange. Then a few other companies (mainly iiNet and Internode, if I remember correctly) forked out a shitload of money and got their own international link. Try getting a 2nd line from a different ISP and see how it goes. If one line is with Telstra, try iiNet or Internode. If you're already with iiNet/Internode/Optus, try getting a Telstra link (god, I feel dirty just writing that). Steer clear of the low-budget carriers (Dodo, TPG) as they over-sell their services, and although 1TB of quota a month sounds great, when their core routers are overloaded because they're just Cisco 800s (ok, that's an exaggeration) you're never going to get good quality of service.

  4. Wait. The Australian government is in the process of rolling out a Fibre to the Premises project called the National Broadband Network. If you're not in one of the planned development areas, then you might be in for a long wait (5+ years). But if the office had not already been established (it sounds like it has, though), then finding a convenient location inside the NBN rollout could be worth it (service could arrive anywhere from 6 months to 3 years, though). 100Mbps fibre terminated at your front door should be a pretty good deal. However, if we have a change of government at the next election (which is highly possible), then you can be assured they will can the NBN and replace it with an LTE wireless network which, whilst reasonable for checking emails on your Blackberry and stalking your ex-girlfriend on Facebook, will not be as amazeballs as the NBN.
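To make the registry tweak from point 2 concrete, here's a minimal sketch using Python's standard winreg module. The value name and path are as given above; running it elevated on the terminal server, the choice of 10 retries, and the reboot afterwards are my assumptions, not gospel.

    import winreg

    # Sketch: raise TcpMaxDataRetransmissions so sessions from high-latency
    # clients aren't dropped so eagerly. Assumes an elevated (Administrator)
    # Python process on the terminal server; Tcpip parameters are read at
    # boot, so a reboot is needed for the change to take effect.
    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # REG_DWORD 0x0000000a = 10 retransmission attempts (default is 5)
        winreg.SetValueEx(key, "TcpMaxDataRetransmissions", 0,
                          winreg.REG_DWORD, 10)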

Apart from all of the above, which are band-aids at best, the other option might be to see if the software they're running can be extended to multiple sites. SQL merge replication is a common approach, but the database and software usually have to be designed to take advantage of it. If they are, then perhaps always-on merge replication plus a local terminal server/application is the way to go.

Mark Henderson
    I'm tempted to downvote you just for suggesting the use of Telstra. ;) However, you've done a good job of describing our frustrations and limitations, so have a +1 instead. – John Gardeniers May 15 '12 at 02:09
    Great work @Mark Henderson. As an aside, similar conclusions and advice can be drawn for New Zealand (where I reside). Replace "Telstra" with "Telecom", and "NBN" with "UFB". – Ashley May 22 '12 at 02:17
3

In addition to Mark's excellent answer, I'd like to suggest that you consider some WAN acceleration technology: appliances with a few TB of storage at each end that, for data sent recently enough that it's still in cache, transmit only a reference rather than the data itself. Riverbed and Cisco both make products that do this.
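As a toy illustration of the idea (this is not any vendor's actual protocol, just the general shape of reference-based deduplication), the sender hashes fixed-size chunks and sends a short reference whenever the far end has already seen the chunk:

    import hashlib

    CHUNK = 64 * 1024  # 64 KB chunks; real appliances use smarter boundaries

    def encode(data, seen):
        """Split data into chunks; replace previously-seen chunks with
        hash references. 'seen' mirrors the digests the far end holds."""
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen:
                out.append((digest, None))   # reference only: a few bytes
            else:
                seen.add(digest)
                out.append((digest, chunk))  # literal data, sent once
        return out

The second time a large file (or a lightly-edited copy of it) crosses the link, most chunks collapse to references, which is why these boxes pair well with chatty Windows file traffic.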

They also do protocol acceleration, which locally emulates some of the traffic they know they'll be seeing for certain chatty file protocols. That said, protocol emulation doesn't work across the board, so you might want to ensure passthrough for certain applications.

Basil
2

That latency is going to kill you (as it is already) for TS/RDS. If you can't address that, no amount of bandwidth or tweaking is going to help.

I might suggest, though, that you give RemoteApp a try. The underlying component is still TS/RDS, and behind the scenes there's still a full desktop session, but because only application windows are presented to the user, it can cut down on the amount of data traversing the link, which might make RemoteApp a passable solution for you.
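For illustration, a RemoteApp connection is driven by a handful of settings in a .rdp file. The sketch below writes one out, folding in the bandwidth-saving options from Mark's answer; the server name and the "wordpad" alias are placeholders I've made up, and the alias must match a RemoteApp actually published on the server.

    # Hypothetical sketch: generate a .rdp file that launches a single
    # RemoteApp instead of a full desktop. All values are placeholders.
    rdp = "\n".join([
        "full address:s:ts01.example.co.uk",     # assumed server name
        "remoteapplicationmode:i:1",             # RemoteApp, not a desktop
        "remoteapplicationprogram:s:||wordpad",  # published app alias
        "session bpp:i:16",                      # 16-bit colour
        "redirectprinters:i:0",                  # no printer redirection
        "redirectdrives:i:0",                    # no drive redirection
    ])

    with open("wordpad.rdp", "w") as f:
        f.write(rdp + "\n")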

joeqwerty
2

BranchCache is a great technology for caching file shares across WAN links, and may beat extending the whole DFS mount across the WAN. Local files will be cached for as long as there is storage to hold them. Obviously writes still need to be sent back to base, but I'm pretty sure BranchCache can help with those as well.
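For example (hedged: the right mode depends on whether the Sydney office keeps a local server), putting each branch client into BranchCache's distributed-cache mode is a single netsh command; a small Python wrapper might look like this:

    import subprocess

    # Sketch: enable BranchCache "distributed cache" mode, where peers on
    # the branch LAN share cached content with each other. Assumes an
    # elevated prompt; hosted-cache mode would instead point clients at a
    # local server.
    subprocess.run(
        ["netsh", "branchcache", "set", "service", "mode=DISTRIBUTED"],
        check=True,
    )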

Ashley