If you are talking about adding their computing power together to make one big supercomputer, then no. That isn't virtualization; that's cluster/distributed computing, and it requires specially written software that can take advantage of such an environment. Virtualization is the exact opposite: taking one computer with a large amount of resources and subdividing them among smaller workloads, which avoids wasting resources (not many things need a dozen GB of RAM or more, for example). Servers typically use Microsoft Hyper-V (which comes with recent editions of Windows Server) or VMware ESXi (a free, mature hypervisor, though the management tools will cost you lots of $$$).
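To give a rough idea of what "specially written software" means here: a clustered program has to divide its work across machines and combine the results itself, typically using something like MPI. A minimal sketch, assuming Python with the mpi4py package and an MPI runtime installed (just one common choice, not something the question mentions):

```python
# Minimal cluster-computing sketch (assumes mpi4py and an MPI runtime such as
# Open MPI are installed). Every machine runs its own copy of this script; the
# work is explicitly divided up and the partial results explicitly combined.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # which process/node am I?
size = comm.Get_size()   # how many processes are in the job?

# Split a large range of numbers across the processes and sum each slice locally.
total_n = 100_000_000
chunk = total_n // size
start = rank * chunk
end = total_n if rank == size - 1 else start + chunk
local_sum = sum(range(start, end))

# Combine the partial sums on rank 0. The "pooling" happens only because the
# program was written to do it; the hardware does not merge itself.
grand_total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 0..{total_n - 1} across {size} processes: {grand_total}")
```

You would launch it with something like `mpirun -np 4 python your_script.py` (or across several hosts with a hostfile); nothing about the hardware makes this distribution happen automatically.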
What you may be thinking of is Desktop Virtualization, where each worker's machine is a thin client/dumb terminal that connects to a central server on which all of the users' programs actually run. This is similar to Terminal Services.
Edit: To elaborate a bit more, I am not aware of any hypervisor that "pools" resources from client machines. The question is somewhat analogous to asking, "If I have 4 cores running at 2GHz, can I combine them into one 8GHz processor?" The general answer in both cases is no. There are specialized exceptions, such as certain multiple-host VM setups or a massively parallel distributed application, but those rely on software built specifically for that purpose. And if it were that simple, why wouldn't big companies like Microsoft pool all of their computing resources into one giant computer with thousands of cores and terabytes of memory? The answer: they can't.
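To make the core analogy concrete: extra cores (or extra machines) give you more throughput when the workload is written to be split up, but any single task still runs at the speed of one core. A quick sketch using only the Python standard library (timings are illustrative and will vary by machine):

```python
# Illustrates "4 x 2GHz != 8GHz": four workers finish four independent tasks
# roughly four times faster than one worker, but each individual task takes
# just as long as before -- more throughput, not a faster core.
import time
from multiprocessing import Pool

def busy_work(n):
    """A CPU-bound task; giving it more cores does not make it finish sooner."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    tasks = [5_000_000] * 4

    start = time.perf_counter()
    serial_results = [busy_work(n) for n in tasks]     # one core, tasks run back to back
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool(processes=4) as pool:
        parallel_results = pool.map(busy_work, tasks)  # four cores, tasks run side by side
    print(f"parallel: {time.perf_counter() - start:.2f}s")
```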