18

I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge of variously OEM-licensed editions of software currently installed on each machine.

It occurred to me that it is probably possible to forgo upgrading each workstation and instead have each one act as a dumb terminal accessing a virtualization server, with the entire virtual workstation hosted on that server.

Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming it is, what rough guidelines exist for calculating the server resources required?

How exactly do these solutions handle a user accessing their VM? Does the user log on normally to their physical workstation and then use Remote Desktop to access the VM, or is this usually negotiated by a dedicated piece of client software?

What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Windows Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight into VMware's product lineup, especially if there are compelling reasons to choose it over Hyper-V in a Microsoft shop.

Edit: Just to add some more information: the implementation goal is to upgrade our platform from a Windows Server 2003 / XP environment to a full Windows Server 2008 / Windows 7 platform without having to perform the associated work on each of our differently configured workstations.

Also, could anyone offer realistic guidelines for how much hardware is needed to support 25-50 virtual workstations? The majority of the workstations do nothing except Office, Outlook, and the web. The only high-demand workstations are the development workstations, which would keep everything local.
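To give a sense of the kind of estimate I'm after, here's the rough back-of-envelope math I have in mind (a sketch; every per-VM figure is my own guess, not a vendor number):

    # Back-of-envelope sizing for light-use Win7 VMs (Office/Outlook/web).
    # All per-VM figures below are assumptions, not benchmarks.

    workstations = 50        # worst case of the 25-50 range
    ram_per_vm_gb = 1.0      # assumed RAM per light-use VM
    cpu_per_vm_ghz = 0.3     # assumed *average* CPU draw, not peak
    concurrency = 0.8        # assumed fraction of users active at once

    ram_needed = workstations * ram_per_vm_gb * concurrency
    cpu_needed = workstations * cpu_per_vm_ghz * concurrency

    print(f"~{ram_needed:.0f} GB RAM, ~{cpu_needed:.0f} GHz aggregate CPU")
    # -> ~40 GB RAM, ~12 GHz aggregate CPU, before host overhead and headroom

Is that roughly the right way to think about it, or are there better rules of thumb?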

jscott
  • 24,204
  • 8
  • 77
  • 99
Chris Marisic
  • 1,404
  • 8
  • 33
  • 51
  • 1
    Simple things like office, email & web browsers are already thin apps. Pretty much any remote desktop technology is going to use as many resources as they do, if not more. Really, this sort of technology is only useful when you need to share limited/expensive software/hardware resources. It seems to me that you have gone off on a big red herring, when really you should be looking at things that make management easier, like Group Policy, SMS, etc. Also, it raises the question of whether people really need the latest Office, which ties you into upgrading the OS and hardware but does very little extra. – JamesRyan Jan 23 '10 at 03:27
  • "Also it raises the question of do people really need the latest office which ties you in to upgrading the OS and hardware but does very little extra?" Yes because currently all of workstations are licensed OEM copies of Office 2003 and 2007, I want standardization and to be able to correctly use volume licensed editions. Also I would like an exactly the same base platform instead of mixed 32bit/64bit which these goals would immediately be achieved using virtualization. – Chris Marisic Jan 23 '10 at 03:57

11 Answers

16

This type of solution exists on a continuum.

On one end of the spectrum you have client computers running a "thick" operating system (like Windows or a desktop Linux distribution) and connecting via client software to hosted applications (via RemoteApp shortcuts and the Remote Desktop Protocol (RDP), or via the Citrix ICA protocol).

In the middle of the spectrum you have clients connecting via these same protocols to full-blown desktop sessions (rather than a single application), but using a shared operating system installation. This is typically the world of Windows "Terminal Services".

On the far end of the spectrum you have what's typically known as a Virtual Desktop Infrastructure (VDI) where client devices are very stripped down and only host client software to connect to a hosted operating system instance.

All of these situations are physically feasible, but you'd do yourself a favor by investigating the licensing costs before you go down the road of spec'ing servers, etc.

The licensing costs in the Microsoft world include either Terminal Services Client Access Licenses or Windows Virtual Enterprise Centralized Desktop (VECD) licenses for each device or user accessing the VDI solution. Licensing for your desktop application software, depending on where on the spectrum you fall, may also differ from what you currently use, and this may necessitate additional license purchases.

It's likely that you'll find the acquisition costs of a VDI infrastructure to be similar to, if not more expensive than, going down the traditional "thick client" route. Physically and practically, thin-client devices sound like a "win", but software licensing expense has traditionally more than made up for any hardware cost savings, which leaves only "soft cost" management and TCO savings as justification.
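To make that concrete, here's a sketch of the comparison I mean; every price in it is an illustrative placeholder rather than a real quote (the $110/workstation VECD figure mentioned in the comments below is assumed here to be annual), so substitute numbers from your own reseller:

    # Hypothetical 3-year acquisition-cost comparison: VDI vs. thick clients.
    # Every price below is a placeholder, NOT a quote.

    seats = 50
    years = 3

    # VDI route (assumed figures)
    thin_client_cost = 300      # per thin-client device
    vecd_per_year = 110         # per-device VECD subscription (assumed annual)
    server_and_storage = 15000  # virtualization host(s) and shared storage

    vdi_total = seats * (thin_client_cost + vecd_per_year * years) + server_and_storage

    # Thick-client route (assumed figures)
    desktop_cost = 600          # per desktop with an OEM Windows license

    thick_total = seats * desktop_cost

    print(f"VDI, 3 yr:    ${vdi_total:,}")    # -> $46,500 with these guesses
    print(f"Thick client: ${thick_total:,}")  # -> $30,000 with these guesses

With my made-up numbers the thin-client route comes out well behind, and that's before any application licensing differences enter the picture.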

Edit:

Ryan Bolger hit it right on the head with his answer (and I +1'd him) with respect to "soft cost" savings, which you're right to identify as the place to save money.

Learning how to centrally deploy software, manage user environments, and generally maintain the hell out of your network using Group Policy will build your personal knowledge of the "innards" and operation of a Windows network, and it involves far fewer "moving parts" than a VDI infrastructure. Even if you had a VDI infrastructure, frankly, I think you'd still be able to leverage immense benefits from Group Policy-fu.

VDI and remote application delivery are a great solution for very task-specific applications, or for delivering applications over slow or unreliable network connections (think "shared Microsoft Access database over a T1-based WAN"). I don't think that desktop virtualization, at least in its current incarnation as an excessive-licensing-fee-based minefield, is "the answer".

I'll even jump out on a limb and say that, with proper "care and feeding", maintaining even very large fleets of client computers running Windows isn't really all that hard, using the built-in tools in Windows Server, WSUS, good knowledge of scripting, and an understanding of how Windows itself and your application software work. Automating your client computer build, removing users' Administrator rights, and getting a handle on your OS and application update deployment infrastructure will take you leaps and bounds ahead.
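To give a flavor of the "good knowledge of scripting" I mean, here's a minimal sketch that surveys the installed hotfixes on a few domain machines using the built-in wmic tool. It's Python purely for illustration (batch or VBScript would do the same job), the hostnames are hypothetical, and it assumes admin rights and WMI connectivity to the targets:

    # Sketch: survey installed hotfixes across domain machines via wmic.
    # Assumes admin rights and WMI access; hostnames are hypothetical.
    # A real fleet would pull the machine list from Active Directory.
    import subprocess

    machines = ["WS-RECEPTION", "WS-ACCOUNTING"]  # hypothetical hostnames

    for host in machines:
        result = subprocess.run(
            ["wmic", f"/node:{host}", "qfe", "get", "HotFixID,InstalledOn"],
            capture_output=True, text=True,
        )
        print(f"--- {host} ---")
        print(result.stdout if result.returncode == 0 else result.stderr)

Ten minutes of that sort of scripting answers most "what state is my fleet in?" questions without any VDI in sight.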

Evan Anderson
  • 141,071
  • 19
  • 191
  • 328
  • After reading your post and the VECD licensing page, is my understanding correct that I could use a copy of Server 2008 Datacenter, pay $110/workstation, and each virtual workstation would be able to run Win7 Enterprise? – Chris Marisic Jan 15 '10 at 16:36
  • Also, the goal isn't purely an upfront cost reduction; it's to eliminate the work of upgrading each workstation and of dealing with the fact that each workstation has different hardware from a third of the others, and that most are 32-bit while a few machines are now 64-bit. And since our site has no dedicated IT staff, just development staff that supports IT, the soft cost of reducing that labor is our primary goal. – Chris Marisic Jan 15 '10 at 17:39
  • 5
    Your thoughts about soft costs are incorrect; you'll still need to manage just as many (virtualized) workstations, while still supporting the client hardware. You'll also find yourself in deep water _every_ time the server has a problem, since _nobody_ will be able to work without it. This literally means _nobody_. IMO, 25-50 PCs isn't too much for a single person, even if it means hiring somebody part-time to manage it. As for the upgrade, buy all the same model PCs from a major vendor, build a standard image, follow best practices, etc., and you'll be fine. – Joe Internet Jan 15 '10 at 20:39
  • 3
    Don't buy those 25-50 workstations. Lease 'em. But, what Joe Internet said... – Adrien Jan 15 '10 at 20:46
16

I'd like to build a bit off of Evan's answer regarding the different ways to remotely host applications.

Your primary concern seems to be about reducing the administrative overhead involved with managing a bunch of disparate workstations and their individual software installations. You don't need to move to a remotely hosted application infrastructure to accomplish that goal.

With a single server set up as a domain controller and all of your workstations joined to that domain, you can do just about everything you need right out of the box. The domain itself handles centrally configured user accounts. Group Policy can handle configuring all of the system settings on the workstations, and Group Policy Software Deployment can handle your application installations. The built-in Windows Deployment Services, combined with the free Microsoft Deployment Toolkit, can even give you an OS deployment solution. WSUS is also free and can handle your OS and Microsoft software patching.

There's just a ton of stuff you can do with nothing more than a single server OS license and your workstation OS licenses. It all has a bit of a learning curve, but it's no more difficult than the things you'd have to learn with a remotely hosted app or OS solution.
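If you want to see that machinery at work, the built-in gpresult tool will show you exactly which Group Policy objects a domain-joined workstation picked up. A trivial wrapper as a sketch (gpresult itself is the point; the Python around it is just for illustration):

    # Sketch: report which Group Policy objects applied to this machine/user.
    # Uses the built-in gpresult.exe; assumes a domain-joined Windows box.
    import subprocess

    result = subprocess.run(["gpresult", "/r"], capture_output=True, text=True)
    print(result.stdout if result.returncode == 0 else result.stderr)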

Ryan Bolger
  • 16,472
  • 3
  • 40
  • 59
  • 1
    +1 - You beat me to it... Shows that I shouldn't step away from ServerFault to go do billable work, eh? – Evan Anderson Jan 15 '10 at 19:19
  • Yes, but the goal was to take care of all of this in one fell swoop in our move to the Win7 platform and not need to deal with the physical workstations any longer. – Chris Marisic Jan 15 '10 at 19:38
  • 3
    You're going to have some sort of physical workstation regardless of the solution you pick. Even if you end up with true "thin clients", that's just code for "low end workstation that might not have a hard drive". Ultimately the different solutions determine what ends up on the local workstations, how it gets there, and what is left on the server(s). – Ryan Bolger Jan 16 '10 at 00:02
4

We are in the middle of planning desktop virtualization for a few hundred users, and there are a lot of subtle gotchas. One is the fact that the alleged "dumb terminals" are not so cheap, and of course they need software patches as well, though admittedly fewer than a full OS install. The next gotcha is some exec who "has" to have something the dumb terminal doesn't do, which blows the model away. Then remote access. Then VoIP. Then VMware turns out to be more expensive than you thought. Sheesh...

JamesR
  • 1,061
  • 5
  • 6
  • 1
    My goal would be to have them run purely on the server so they actually have their own VHDs; that way, any time a person needed specific software, I'd just take a new backup of their VHD after installing it, allowing an easy restoration if they ever damage their instance through idiocy/malware/etc. – Chris Marisic Jan 15 '10 at 17:37
3

We have used both XenServer from Citrix and VMware ESX to virtualize workstations. XenServer is free, and I believe the ESXi version is as well. Citrix also makes a product called Provisioning Server, which makes it very simple to create, modify, and deploy virtual workstations with shared configurations.

As mentioned above, you'll want redundant servers if you go this route to help prevent outages.

Having said all this, it's been my experience that virtualizing workstations is only a good idea when you have a specific reason for doing it - for example, workstations at a remote site where you won't be able to go out and deploy software updates. For general computing, it's more of a hassle than it's really worth, and you won't end up saving that much money. And, especially for a small organization, the KISS principle would generally argue against thin clients.

Graham Powell
  • 410
  • 2
  • 8
  • Citrix has some interesting product offerings, but be sure that you think about what the per-device licensing cost is going to look like if you start getting into their "for pay" offerings. I sat through a sales presentation from Citrix a few weeks ago and, if memory serves, XenDesktop "VDI Edition" is $95.00 per user/device, or $195.00 per concurrent user at list price. – Evan Anderson Jan 15 '10 at 16:17
  • Note that that price is ON TOP of the Microsoft Terminal Server license. – Jim B Jan 21 '10 at 17:37
2

I'd take a long look at the Sun Ray desktop boxes. They work quite well (assuming you have enough backend horsepower), even in Windows shops, and they're fairly cheap compared to normal desktops.

Bill Weiss
  • 10,782
  • 3
  • 37
  • 65
  • Care to add some links? – Chris Marisic Jan 15 '10 at 22:10
  • google "Sun Ray desktop" – Sam Jan 22 '10 at 00:35
  • Don't know why I didn't see this before. The basic unit is at http://www.sun.com/software/index.jsp?cat=Desktop&subcat=Sun%20Ray%20Clients . You run their software (which is free) on a Linux/Solaris server to give the boxes a desktop. There's a mode in which that desktop is a Citrix terminal pointed at your big ol' farm of Windows boxes. – Bill Weiss Jan 22 '10 at 16:33
  • The Sun Ray 2 (or 2FS) are the specific units I've used. Sun will also gladly send you whitepapers and stuff talking about deployments. – Bill Weiss Jan 22 '10 at 16:34
1

The biggest question in my mind is: Can you be OK with the possibility of losing EVERYTHING in one go? Is your boss OK with that?

If you put everyone's work on one server (I'm assuming you'll have proper backups, etc.), it's still possible for that server to fail. Is it OK for the failure of one server to take out the entire company for a day or so while you replace it, rebuild it, and get it back in operation?

I'd never even consider that solution, just because it creates such a wide-acting single point of failure, but your mileage may vary.
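To put a rough number on that exposure, here's the back-of-envelope math (a sketch; the availability figure is an assumption for illustration, and I'm only counting outages that land during working hours):

    # Sketch: expected company-wide cost of a single point of failure.
    # The availability figure is an assumption, not a measurement.

    availability = 0.995        # assumed: one host, ~99.5% uptime
    work_hours_per_year = 2080  # one employee's annual working hours
    users = 50

    downtime_hours = (1 - availability) * work_hours_per_year
    lost_person_hours = downtime_hours * users

    print(f"~{downtime_hours:.0f} h/yr down -> ~{lost_person_hours:.0f} person-hours lost")
    # -> ~10 hours of outage becomes ~520 person-hours with everyone on one box

With separate workstations a failure costs you one person's time; with one server, every outage hour is multiplied by your headcount.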

Michael Kohne
  • 2,284
  • 1
  • 16
  • 29
  • 3
    Certainly, eliminating every single point of failure is a path to diminishing returns, but splitting a VDI infrastructure over a couple of server computers, even in a small business scenario, isn't infeasibly expensive or technically difficult. If the downtime will cost enough to justify it, just use a second server computer (third, etc). – Evan Anderson Jan 15 '10 at 16:12
  • I would want to host the data on an iSCSI SAN and at some point add additional servers, hopefully set up as load-balanced so that if one fails, the other(s) pick up the work. – Chris Marisic Jan 15 '10 at 16:38
  • Most ESX(i) projects are a pain in the ass^H^H^H neck if you don't have shared storage. ESX brings everything needed out of the box: a cluster-aware file system for your VMDKs, and a fail-over/high-availability/even fault-tolerant cluster for your VMs. – pfo Jan 15 '10 at 19:59
  • Yes, part of the reasoning for all of this is to further justify the shared storage I want, instead of ending up with a full virtual system that sits idle 98% of the day when I'm not using it as my development/staging servers. – Chris Marisic Jan 15 '10 at 20:08
1

The RHEV VDI is about to come out; it features SPICE (a protocol that beats RDP/ICA) and quite a few other things.

Have a look at http://www.redhat.com/ and, of course, http://www.redhat.com/virtualization/rhev/desktop/

dyasny
  • 18,482
  • 6
  • 48
  • 63
1

One of the things most folks don't get when going to VDI is that your administrative costs don't necessarily go down; they go up, because you now get to manage two distinct desktop environments for every user. One of the big cost-saving benefits of VDI is in software and hardware management, but not because it's virtual. VDI is usually a great way to force IT to manage software deployment better, and you generally get a more locked-down environment (no more developers installing tools as they please on their desktops). If you try to migrate a mismanaged desktop environment to VMs, it's far more likely to be more expensive than buying workstations and properly managing your environment. In addition, there is usually a cost associated with the underlying hypervisor, and that takes additional management skills.

Jim B
  • 23,938
  • 4
  • 35
  • 58
  • 1
    "(no more developers installing tools as they please on their desktop)." haha no thanks to that! We're going to keep our physical machines! You can't expect developers to not have admin rights, I wouldn't work at a place that didn't give me admin on my own box. – Chris Marisic Jan 22 '10 at 13:12
  • 1
    the only folks I don't let have admin rights if they ask are developers. Most of the time they want it to install some tool they can't have. – Jim B Jan 23 '10 at 01:11
  • 2
    Glad I don't work at the company you do! Those tools can save us hours if not days of wasted time, which invisibly costs a company thousands of dollars. Of course, it's always shocked me how many poor decisions companies make that ensure their people are less productive. – Chris Marisic Jan 23 '10 at 04:01
  • 1
    Same with a company not wanting to purchase a tool that costs a trivial amount (~$300) but will save a developer hundreds of hours over a year (specifically referring to ReSharper in this case, but I'm sure there are equivalent tools for all types of other developers). – Chris Marisic Jan 23 '10 at 04:03
  • 2
    The problem is that it visibly costs companies thousands of dollars when we spend time rebuilding servers and workstations that some developer hosed by screwing around with permissions, or by adding software that makes the system unstable (of course they have no change documentation to figure out what killed it), or, worse yet, that is pirated. We don't restrict developers from having tools - they just can't install them themselves, and they have to justify them, like everyone else. If a developer must have a box to test on with admin access, it's a VM without network access. – Jim B Jan 23 '10 at 05:01
0

The case study of Largo, Florida, may prove informative. They migrated a significant number of non-technical users to a Linux-based thin-client network design and realized significant cost savings as well as increased productivity (due to reduced workstation downtime and improved user data backup). Slashdot profiled the city several years ago. Since that article, it seems the city has migrated to a Citrix solution.

pcapademic
  • 1,650
  • 1
  • 14
  • 22
0

What you are describing is best served by Terminal Services rather than virtualisation. Regardless, I think that by the time you add up the cost of server(s) able to handle such a load, plus the cost of thin clients, you'll find it's a lot cheaper to have separate workstations.

Maintaining separate machines is no harder and no more work than maintaining either TS or virtual machines, when done properly. On the other hand, having people able to work when the server is down is a huge plus in most cases.

John Gardeniers
  • 27,262
  • 12
  • 53
  • 108
  • Part of the reasoning is that we can use the server hardware for more than just the clients, and we won't need to buy any new workstations or upgrade them in any way (except perhaps to add gigabit NICs if we ever need more of them to use our LAN at gigabit speed). – Chris Marisic Jan 26 '10 at 15:54
0

I am surprised that I didn't see anyone name VMware View specifically. I think View is one of the best VDI solutions out there.

http://www.vmware.com/products/view/

"Get Your Desktop “To Go” with VMware View

Move towards user-centric computing and transform traditional stationary desktops into untethered stateless workspaces available from anywhere and at anytime. VMware View modernizes desktops and applications by moving them into the cloud and delivering them as a managed service. Processes are automated and efficient, security is increased and the total cost of desktop ownership is reduced by 50%. End users get a rich, consistent and high performance desktop experience from any qualified device whether in the office or on the go."

Chadddada
  • 1,670
  • 1
  • 19
  • 26