I recently bought a box from System76 that has multiple GPUs: one Quadro M6000 and two Tesla K40s.
When I run lspci | grep -i nvidia, it says:
05:00.0 VGA compatible controller: NVIDIA Corporation Device 17f0 (rev a1)
05:00.1 Audio device: NVIDIA Corporation Device 0fb0 (rev a1)
06:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
09:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
So they're there. But when I run nvidia-smi -L, it only shows:
GPU 0: Quadro M6000 (UUID: GPU-09446504-6a9e-866a-a65d-0f1d55b7657b)
and ls -l /dev/nvidia* shows:
crw-rw-rw- 1 root root 195, 0 Aug 9 03:29 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Aug 9 03:29 /dev/nvidiactl
crw-rw-rw- 1 root root 248, 0 Aug 12 16:19 /dev/nvidia-uvm
I can't be sure, but I'm guessing /dev/nvidia0 is the Quadro M6000, and perhaps the fact that there is no /dev/nvidia1 or /dev/nvidia2 is another symptom (or perhaps the cause) of the box not seeing the Tesla K40s. Also, my test programs that call cudaGetDeviceCount report only one GPU (see the sketch below).
I'm running Ubuntu 14.04.3, and I've installed cuda_7.0.28_linux.run (and installed the NVIDIA drivers via that run file).
Why are the other cards inaccessible? How do I make them accessible?
I had so many issues trying to set up multiple Nvidia cards on Ubuntu that I gave up. Better to consult Nvidia support directly: if you're into GPU computing they are actually good at helping you, but Linux is not their forte – None – 2015-08-13T02:13:52.817