
If I use VMware with 4 virtual machines and 4 GPUs (NVIDIA Quadro/Tesla), can I pass each card through to a different virtual machine, so that every VM gets its own GPU?

If I have one of these CPUs, then I have an IOMMU: http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware#CPUs

Peripheral memory paging can be supported by an IOMMU: http://en.wikipedia.org/wiki/IOMMU#Advantages

I.e., if I have an IOMMU, then I have Intel's "Virtualization Technology for Directed I/O" (VT-d), which should make it possible to do what I want.
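For reference, here is a minimal sketch (assuming a Linux host with sysfs; the program name and messages are my own) of how I check whether the IOMMU is actually enabled: as far as I know, the kernel creates one directory per IOMMU group under /sys/kernel/iommu_groups when VT-d/AMD-Vi is active.

```c
/*
 * Minimal sketch, assuming a Linux host with sysfs mounted at /sys.
 * If VT-d/AMD-Vi is enabled (e.g. intel_iommu=on on the kernel command
 * line), /sys/kernel/iommu_groups contains one directory per IOMMU group.
 */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    const char *path = "/sys/kernel/iommu_groups";
    DIR *dir = opendir(path);
    int groups = 0;

    if (!dir) {
        printf("%s not found: IOMMU support missing or disabled\n", path);
        return 1;
    }

    for (struct dirent *e; (e = readdir(dir)) != NULL; )
        if (e->d_name[0] != '.')        /* skip "." and ".." */
            groups++;

    closedir(dir);
    printf("%d IOMMU group(s) found: VT-d/IOMMU appears to be active\n", groups);
    return groups == 0;
}
```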

But when we use an NVIDIA GPU with CUDA >= 5.0, we can use GPUDirect RDMA, and the documentation says:

http://docs.nvidia.com/cuda/gpudirect-rdma/index.html#how-gpudirect-rdma-works

Traditionally, resources like BAR windows are mapped to user or kernel address space using the CPU's MMU as memory mapped I/O (MMIO) addresses. However, because current operating systems don't have sufficient mechanisms for exchanging MMIO regions between drivers, the NVIDIA kernel driver exports functions to perform the necessary address translations and mappings.
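If I read the docs right, the exported functions are the nvidia_p2p_* calls from nv-p2p.h. Just to show what I mean, here is a rough kernel-side sketch of the flow as I understand it (my own assumption, not a verified build; names like demo_pin_gpu_range and demo_mapping are made up, and struct fields may differ between driver releases): a third-party driver translates a CUDA virtual address range into the GPU BAR physical addresses it programs into its own DMA engine.

```c
/* Hedged sketch only: illustrates how a third-party kernel driver might use
 * the nvidia_p2p_* interface from nv-p2p.h (shipped with the NVIDIA driver)
 * to obtain physical addresses of GPU BAR pages for peer-to-peer DMA. */
#include <linux/kernel.h>
#include <linux/types.h>
#include <nv-p2p.h>

#define GPU_PAGE_SIZE 0x10000ULL        /* GPU pages are 64 KiB */

struct demo_mapping {
    struct nvidia_p2p_page_table *page_table;
};

/* Invoked by the NVIDIA driver if the mapping is revoked behind our back
 * (e.g. the CUDA context that owns the memory is destroyed). */
static void demo_free_callback(void *data)
{
    struct demo_mapping *m = data;

    nvidia_p2p_free_page_table(m->page_table);
    m->page_table = NULL;
}

/* Pin 'len' bytes at CUDA device address 'gpu_va' and log the physical
 * addresses a peer PCI device would use for DMA. */
static int demo_pin_gpu_range(struct demo_mapping *m, u64 gpu_va, u64 len)
{
    u64 aligned_va  = gpu_va & ~(GPU_PAGE_SIZE - 1);
    u64 aligned_len = ALIGN(gpu_va + len - aligned_va, GPU_PAGE_SIZE);
    u32 i;
    int ret;

    ret = nvidia_p2p_get_pages(0, 0, aligned_va, aligned_len,
                               &m->page_table, demo_free_callback, m);
    if (ret)
        return ret;

    /* These are the addresses that must look the same to every PCI device,
     * which seems to be why the documentation requires the IOMMU to be
     * disabled for RDMA for GPUDirect. */
    for (i = 0; i < m->page_table->entries; i++)
        pr_info("GPU page %u -> physical address 0x%llx\n", i,
                (unsigned long long)m->page_table->pages[i]->physical_address);

    nvidia_p2p_put_pages(0, 0, aligned_va, m->page_table);
    return 0;
}
```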

http://docs.nvidia.com/cuda/gpudirect-rdma/index.html#supported-systems

RDMA for GPUDirect currently relies upon all physical addresses being the same from the PCI devices' point of view. This makes it incompatible with IOMMUs and hence they must be disabled for RDMA for GPUDirect to work.

Why does NVIDIA recommend disabling the IOMMU, and can I still use the IOMMU (VT-d) on Sandy/Ivy Bridge to give each virtual machine its own GPU?

Alex