The type of cluster that presents itself as a single operating system with a large pool of memory, many CPUs, and the ability to run whatever would normally run on the non-clustered version of that OS is called a Single System Image (SSI). It takes multiple cluster nodes and does just what you said: merges them into a single OS instance.
This is not commonly done because such a system is extremely hard to engineer correctly, and systems that cluster at the application level instead of the OS level are a lot easier to set up and often perform much better.
The reason for the performance difference has to do with assumptions. A process running on an OS assumes all of its available resources are local. A cluster-ready process (such as a render farm) assumes that some resources are local and some are remote. Because of that difference in assumptions, how resources get allocated is very different.
Taking a general-purpose single-node operating system like Linux and converting it into an SSI-style cluster takes a lot of reworking of kernel internals. Concepts such as memory locality (see also: NUMA) become extremely important on such a system, and the cost of moving a process to a different CPU can be a lot higher. A second concept, one not really present in single-node Linux, is CPU locality across nodes: if you have a multi-threaded process, having two of its threads running on one node and two on another can perform a lot slower than all four running on the same node. It is up to the operating system to make these local vs. remote choices for processes that are likely blind to such distinctions.
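To make the locality point concrete on an ordinary single-node Linux box (a minimal sketch, not an SSI mechanism): Python's `os.sched_setaffinity` plus the NUMA topology exposed under sysfs can keep a process on the CPUs of one node so its threads share local memory. The choice of node 0 and the sysfs layout are assumptions for illustration; this is Linux-only.

```python
# Minimal sketch (Linux, Python 3.3+): confine this process to the CPUs of
# a single NUMA node. Assumes the usual /sys/devices/system/node/ layout.
import os

NODE = 0  # assumption for the example: keep everything on NUMA node 0

def cpus_of_node(node):
    # Each NUMA node lists its CPUs as cpuN entries under sysfs.
    path = f"/sys/devices/system/node/node{node}"
    return {int(name[3:]) for name in os.listdir(path)
            if name.startswith("cpu") and name[3:].isdigit()}

if __name__ == "__main__":
    local_cpus = cpus_of_node(NODE)
    # Pins the calling thread; threads spawned afterwards inherit the mask,
    # so their memory allocations tend to land on node 0 as well.
    os.sched_setaffinity(0, local_cpus)
    print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```

On an SSI cluster the same decision spans physical machines rather than sockets, which is why getting it wrong costs so much more.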
However, if you have a cluster-ready application (such as those listed by Chopper), the application itself makes the local/remote decisions. It is fully aware of the local vs. remote implications of its operations and acts accordingly.
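As a rough sketch of what that application-level awareness looks like, here is a hypothetical render-farm dispatcher that keeps cheap frames local and ships expensive ones to remote workers. The node names, cost threshold, and the `render_local`/`submit_to_node` helpers are placeholders standing in for whatever a real farm's queue manager or RPC layer provides.

```python
# Hypothetical application-level scheduler: the app, not the OS, weighs the
# cost of doing work remotely (data transfer, latency) against doing it here.
import itertools

NODES = ["node1.example.com", "node2.example.com"]  # assumed remote workers
_next_node = itertools.cycle(NODES)

def render_local(frame):
    print(f"rendering frame {frame} locally")

def submit_to_node(node, frame):
    # In a real farm this would be an RPC or queue submission, with the scene
    # data copied over the network first.
    print(f"sending frame {frame} to {node}")

def dispatch(frame, cost_estimate, network_threshold=5.0):
    # Cheap frames aren't worth the transfer overhead; expensive frames are.
    if cost_estimate < network_threshold:
        render_local(frame)
    else:
        submit_to_node(next(_next_node), frame)

for frame, cost in [(1, 2.0), (2, 12.0), (3, 30.0)]:
    dispatch(frame, cost)
```

The point is that the local/remote trade-off is made with knowledge the OS simply doesn't have, which is a big part of why application-level clustering tends to perform better than trying to hide the cluster behind a single OS image.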