
Wikipedia says that the Tesla M60 has 2×8 GB of RAM (whatever that means) and a TDP of 225–300 W.

I use an EC2 instance (g3s.xlarge) which is supposed to have a Tesla M60. But the nvidia-smi command says it has 8 GB of RAM and a max power limit of 150 W:

> sudo nvidia-smi
Tue Mar 12 00:13:10 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   43C    P0    37W / 150W |   7373MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6779      C   python                                      7362MiB |
+-----------------------------------------------------------------------------+

What does this mean? Do I get 'half' of the card? Is the Tesla M60 actually two GPUs stuck together, as the RAM specification (2×8 GB) suggests?

hans

1 Answer


Yes, the Tesla M60 is two GPUs on a single card, and each g3s.xlarge or g3.4xlarge instance gets one of the two GPUs.
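A minimal sketch of the arithmetic, assuming the figures from the question's nvidia-smi output: the reported capacity of 7618 MiB corresponds to one of the M60's two 8 GB GPUs, minus memory reserved by the driver/firmware.

```python
# Interpret the memory capacity nvidia-smi reported for this instance.
# 7618 MiB is the value from the question's output; the M60 spec is
# 2 x 8 GB, i.e. 8 GB per GPU.
total_mib = 7618
total_gib = total_mib / 1024
print(f"{total_gib:.2f} GiB")  # ~7.44 GiB: one 8 GB GPU minus reserved memory
```

Likewise, the 150 W cap shown by nvidia-smi is per GPU, which is consistent with the 225–300 W TDP Wikipedia quotes for the whole two-GPU board.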

Michael Hampton