After configuring a system with two Tesla K80 cards, I noticed while running nvidia-smi
that one of the four GPUs (GPU 3) shows 71% utilization even though the process list reports "No running processes found". Why is this happening, and how do I correct it?
Here is the output from nvidia-smi:
➜ compute-0-1: ~/> nvidia-smi
Mon Sep 26 14:48:00 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.77                 Driver Version: 361.77                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 0000:05:00.0     Off |                    0 |
| N/A   34C    P0    57W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K80           Off  | 0000:06:00.0     Off |                    0 |
| N/A   26C    P0    76W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K80           Off  | 0000:85:00.0     Off |                    0 |
| N/A   33C    P0    60W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K80           Off  | 0000:86:00.0     Off |                    0 |
| N/A   24C    P0    74W / 149W |      0MiB / 11441MiB |     71%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
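For reference, the same readings can also be pulled through nvidia-smi's query interface, which is easier to watch over time than the full table. This is just a sketch using the standard query flags (the field names are listed under nvidia-smi --help-query-gpu):

# per-GPU utilization and memory, one CSV line per GPU
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used --format=csv

# compute processes currently attached to any GPU
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

# detailed utilization counters for GPU 3 only
nvidia-smi -q -d UTILIZATION -i 3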