
I have an application which needs to run lots of double-precision floating point operations in parallel on small datasets. I've just started exploring the possibility of running these computations on a GPU. While comparing performance metrics across different GPUs, I have noticed that prices differ wildly, while the performance metrics do not.
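To make the workload concrete, here is a minimal sketch of the sort of double-precision kernel I have in mind (everything here is illustrative: the kernel name, array size, and the fused multiply-add are placeholders, my real computation is more involved):

```
#include <cstdio>
#include <cuda_runtime.h>

// Many independent double-precision operations on a small dataset
// (placeholder computation: one FP64 fused multiply-add per element).
__global__ void fma_kernel(const double* x, const double* y, double* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = fma(x[i], 2.0, y[i]);
    }
}

int main()
{
    const int n = 1 << 10;  // "small dataset": 1024 doubles (illustrative)
    double *x, *y, *out;
    cudaMallocManaged(&x, n * sizeof(double));
    cudaMallocManaged(&y, n * sizeof(double));
    cudaMallocManaged(&out, n * sizeof(double));
    for (int i = 0; i < n; ++i) { x[i] = i; y[i] = 1.0; }

    fma_kernel<<<(n + 255) / 256, 256>>>(x, y, out, n);
    cudaDeviceSynchronize();

    printf("out[0]=%f out[n-1]=%f\n", out[0], out[n - 1]);
    cudaFree(x); cudaFree(y); cudaFree(out);
    return 0;
}
```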

To be more exact, the best card I found for my use case in terms of price-performance ratio is, on paper, the NVIDIA Tesla P100, which offers roughly 5 TFLOPS of double-precision throughput. The next step up, the V100, offers 1.4x the performance yet costs 5x as much. The A2, which costs only slightly less than the P100, has just 1% of the double-precision TFLOPS. I observed this kind of price disparity across the board.
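To show how I am comparing cards, here is the rough arithmetic I am doing, with the P100's price taken as the unit. Only the relative factors mentioned above are used; the 0.9 relative price for the A2 is my own rough reading of "slightly less", not a quote:

```
#include <cstdio>

// FP64 throughput and price relative to the P100, as I read the listings.
struct Card { const char* name; double fp64_tflops; double relative_price; };

int main()
{
    Card cards[] = {
        {"Tesla P100", 5.0,        1.0},  // ~5 TFLOPS FP64, baseline price
        {"Tesla V100", 5.0 * 1.4,  5.0},  // 1.4x the FP64 rate, ~5x the price
        {"A2",         5.0 * 0.01, 0.9},  // ~1% of the FP64 rate, slightly cheaper
    };

    for (const Card& c : cards) {
        printf("%-10s  %5.2f TFLOPS FP64  ->  %5.2f TFLOPS per P100-price-unit\n",
               c.name, c.fp64_tflops, c.fp64_tflops / c.relative_price);
    }
    return 0;
}
```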

For my use case of mainly double-precision-heavy computations, is it correct to look only at double-precision TFLOPS to determine the theoretical performance of a GPU, or are there other factors I should include in the comparison?

Bobface
  • I think the question would be better answered on https://cs.stackexchange.com/ as it's specifically designed for computing science – djdomi Sep 05 '22 at 17:37
