How does such a small Teradici card offer high-resolution, full-frame-rate 3D graphics (1:38) http://www.youtube.com/watch?v=eXA4QMmfY5Y&feature=player_detailpage#t=97s
for ESXi 5.0/5.1 VDI environments? We're shooting for an environment capable of AutoCAD, SolidWorks, and 1080p YouTube. I can't see how such a small, low-profile card could possibly have the horsepower to handle the GPU computations for an environment that big. We're going to have up to 64 VDIs per server, and we're a company of 500-1000 employees. Someone enlighten me, please!
We're determining which route to go (RemoteFX vs. VMware View/PCoIP) and which hardware to use (NVIDIA 4GB non-Quadro/non-Tesla GPUs vs. a Teradici card). The servers have three x4, three x8, and one x16 PCIe slots. Two of the x8 slots will be occupied by SAS RAID cards.
EDIT: I'm not quite sure where I went wrong. I don't fully understand how the Teradici PCIe card works myself, though I have some understanding (just not enough to really know what it does). One of the answers does help clear up some of the fog about what the card actually does (it acts as a video encoder, sending the display over TCP/IP, with input data such as mouse movements and keyboard entry flowing back). My supervisor had attended a day-long VMware conference in St. Louis, MO, and one of the things he was introduced to there was Teradici's work on PCoIP.
I think both my supervisor and I have some fog in our minds about what Teradici's card actually does (or did). Where I got stuck was that, for the life of me, I couldn't figure out what the card actually does, because Teradici isn't clear and straightforwardly technical about it (i.e., "it simply transcodes bitmaps into video to send over TCP/IP"); instead there's a lot of sales/marketing-style presentation (not that Teradici's card or PCoIP is bad or anything). I wasn't sure whether this card is meant to be standalone (no GPUs necessary), should be combined with GPUs (for our applications), whether we shouldn't bother with Teradici at all, etcetera.
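To make sure I'm not misunderstanding, here's my rough mental model of what the host-side card does, as a sketch. All names here are made up for illustration, and zlib just stands in for "some encoder" (the real PCoIP codec is proprietary); this is only the transcode-and-send idea described above, not how the card actually works internally:

```python
import zlib

def encode_frame(framebuffer: bytes) -> bytes:
    # Host side (the Teradici card): compress a rendered bitmap for the wire.
    # zlib is a placeholder for PCoIP's actual encoding.
    return zlib.compress(framebuffer)

def decode_frame(payload: bytes) -> bytes:
    # Client side (the zero client): decode the stream back to pixels.
    return zlib.decompress(payload)

def remote_session(frames):
    """Simulate one direction of the pipeline: each rendered frame is
    encoded on the host and decoded on the client for display.
    (In reality, input events flow back the other way too.)"""
    for fb in frames:
        wire = encode_frame(fb)
        yield decode_frame(wire)

# A "frame" here is just raw bitmap bytes for illustration.
frames = [bytes([i]) * 1024 for i in range(3)]
displayed = list(remote_session(frames))
```

If that model is right, the card itself does no 3D rendering at all; it only encodes whatever the GPU (or CPU) has already drawn, which would explain how it can be so small.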
Originally we had been planning to put about two NVIDIA GeForce GTX 670/680 4GB cards in SLI per server -- purchasing roughly a couple dozen cards total to be shipped to Rackspace DCs around the world. According to Microsoft, 2GB of VRAM will support 16 VDIs when things are more computationally intensive (GPU-wise, of course); this is per RemoteFX best/recommended practice. For those who don't know, "RemoteFX" is the next-generation title for what you might know as Remote Desktop Protocol (RDP), just as Remote Desktop Services is to Terminal Services. We're looking to support at least 64 VDIs per server (these are big berthas equipped with lots of RAM, dual hexacore Xeons, etc.); the entire server infrastructure is going to be ESXi/vSphere-powered.
The first reason for getting GPUs, as you may gather at this point, is to have sufficient VRAM to support the VDIs.
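For what it's worth, here's the back-of-the-envelope math behind the two-card plan, using the figures quoted above (2GB of VRAM per 16 GPU-intensive VDIs is the number we took from the RemoteFX guidance, not an official sizing table I can cite):

```python
import math

VDIS_PER_2GB = 16    # RemoteFX guidance as quoted above
TARGET_VDIS = 64     # VDIs we want per server
CARD_VRAM_GB = 4     # VRAM per GeForce GTX 670/680 card

# VRAM needed scales linearly with VDI count under that guidance.
vram_needed_gb = 2 * TARGET_VDIS / VDIS_PER_2GB
cards_needed = math.ceil(vram_needed_gb / CARD_VRAM_GB)

print(vram_needed_gb, cards_needed)  # 8.0 2
```

That's where "about two 4GB cards per server" comes from.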
The second reason is that we want to move as many users as possible away from desktop/tower PCs (including AutoCAD/SolidWorks/3DSM/Adobe Creative Suite users) to VDIs in the cloud, making greater use of thin clients and zero clients. Since these applications greatly benefit from having a GPU, employing the right hardware and the right solutions helps everyone.