Anyone know of an alternative to ScaleMP? They let several x86 boxes boot as one large box. Theoretically, AMD's HyperTransport should allow the same thing.
Any other companies or OSS projects doing this?
You should distinguish between three types of technologies:
1) OS Kernel mods (in this case, modules or kernel patches)
[This technology is software-based]
With this kind of technology, the OS is modified to give the user the "feel" of an SMP, but a separate instance of the OS (Linux) still runs on every node. For example, LinuxPMI states clearly on its homepage that:
"LinuxPMI is a set of Linux kernel patches implementing process "migration" over a network. Its goal is to allow you to move a program from your machine to another machine, run it there for a while, and return it without it ever knowing it was gone"
While this functionality is neat, it falls far short of what most users would spec as requirements for SMP. None of these technologies enables, for example, a single application to transparently access and use resources (RAM, CPUs, or I/O devices) across multiple physical nodes. As such, they cannot really be considered alternatives to the technology from ScaleMP.
2) Virtualization Aggregation technologies
[This technology is software-based]
ScaleMP is in this space. A couple of other companies operated here in the past: Virtual Iron (defunct, assets bought by Oracle), and 3LeafNetworks (defunct, assets bought by Huawei of China).
These technologies enable the creation of a virtual SMP. The single OS running on top of this virtual SMP is either unaware of the virtualization (ScaleMP, 3Leaf) or uses paravirtualization to function properly (Virtual Iron).
Using these technologies, your application can transparently map and use more RAM than is available in any single physical node, run the threads of a single application on cores from multiple cluster nodes, or have a processor in one physical system read from a hard drive in another and transmit the data through the NIC of yet a third.
This kind of functionality is what makes these technologies a viable alternative to the next group.
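Whether the shared memory comes from a virtualization layer like this or from a hardware interconnect (group 3 below), the aggregated machine presents itself to a single Linux instance as ordinary NUMA nodes. A minimal sketch (assuming the standard Linux sysfs layout under `/sys/devices/system/node`; nothing here is vendor-specific) that enumerates the nodes and the memory each one contributes:

```python
import glob
import os

def numa_nodes():
    """List the NUMA nodes the kernel exposes under sysfs.

    On an aggregated system, each physical board typically shows up
    as one or more nodes here, just like the sockets of an ordinary
    multi-socket server.
    """
    paths = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
    return [os.path.basename(p) for p in paths]

def node_mem_total_kb(node):
    """Total memory (kB) reported for one NUMA node, e.g. 'node0'."""
    with open(f"/sys/devices/system/node/{node}/meminfo") as f:
        for line in f:
            if "MemTotal" in line:
                return int(line.split()[-2])
    return 0

if __name__ == "__main__":
    for node in numa_nodes():
        print(node, node_mem_total_kb(node), "kB")
```

On a single-socket box this prints one node; on a vSMP or NUMA-interconnect machine you would see one entry per aggregated board, and tools like `numactl` use the same sysfs data to place threads and memory.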
3) NUMA interconnects
[This technology is hardware-based]
Over the years, different companies have developed special chipsets to enable the creation of large SMP machines. Sequent was among the first to create such a chipset for x86 environments (it was acquired by IBM, and its technology still lives inside IBM's X-Architecture servers, now at revision eX5). SGI has NUMAlink, now used in its Altix UV line of products. Bull, a server vendor from France, has the MESCA chip in its scale-up servers. These companies market the overall solution (a server product), so you cannot buy "just the interconnect" from them, and all of them offer only Intel-Xeon-based systems with their scale-up technology. Another company, Numascale, provides an adapter card that enables the aggregation of multiple AMD-Opteron-based systems; with Numascale you could potentially create a "do-it-yourself" SMP out of cluster nodes.
There's a company called Numascale that sells an adapter card containing a directory-based cache-coherence engine and a router for a 3D torus network, allowing one to build ccNUMA machines out of smaller building blocks. The catch is that it's an HTX card, and motherboards with an HTX slot are probably in short supply. Numascale also offers a version that plugs into a PCIe slot for power and picks up the HyperTransport signals from an empty CPU socket, for use with motherboards that lack an HTX connector.