What's the difference between GPU Memory bandwidth and speed?

I was looking at the specs for Nvidia's 10-series graphics cards and noticed they list both memory speed and memory bandwidth. Memory speed is expressed in Gbps and memory bandwidth in GB/sec. To me, that looks like memory speed divided by 8 should equal memory bandwidth, since 8 bits make up one Byte and the other units are the same, but that is not the case.

I was wondering if someone could explain what actually indicates the real transfer rate of data. If there were 2 GPUs, one with a higher memory speed (Gbps) and the other with a higher memory bandwidth (GB/sec), which one could transfer more data in some fixed timeframe (or is that impossible because these 2 things are linked in some way)?

Am I missing something here? I can't seem to find a good answer anywhere... What is actually important here? And why are both measurements expressed in almost the same units (since a Byte is 8 bits, one measurement should be derivable from the other if you convert both to bits or to bytes)?

Evidence here and here (click "VIEW FULL SPECS" in the SPECS section).

BassGuitarPanda

Posted 2017-03-07T15:20:42.057

Reputation: 193

Answers

There are two separate things being specified here. I have copied the Nvidia spec from the page you linked to show it better.

[Nvidia spec table for the GTX 1070: Memory Speed 8 Gbps, Memory Bandwidth 256 GB/sec]

One is the memory chip data-line interface speed of 8 Gbps, which is part of the GDDR5 spec; the other is the aggregate memory bandwidth of 256 GB/s.

A GDDR5 memory chip typically has a 32-bit-wide interface, so the math (for the 1070) goes as follows:

  • 8 Gbps per data line
  • 32 data lines per chip
  • 8 memory chips on the card

Multiplying these together gives us a raw memory speed of 2048 Gbps; divide that by 8 and we get the memory bandwidth of 256 GB/s.
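The chip-by-chip arithmetic above can be checked with a short Python sketch (the per-line rate, line count, and chip count are the GTX 1070 figures quoted in this answer):

```python
# GTX 1070 figures from the spec table above
gbps_per_line = 8    # GDDR5 data-line rate, in Gbps
lines_per_chip = 32  # GDDR5 chip interface width, in bits
chips = 8            # memory chips on the card

# Raw aggregate speed in gigabits per second
total_gbps = gbps_per_line * lines_per_chip * chips

# Divide by 8 bits-per-byte to get GB/s
bandwidth_gb_per_s = total_gbps / 8

print(total_gbps)          # 2048
print(bandwidth_gb_per_s)  # 256.0
```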

The 8 chips at 32 bits per chip also match the 256-bit memory interface width, so you could equally compute (8 Gbps × 256 bits) / 8 bits-per-byte (which neatly cancels down to simply "256") and arrive at the same figure.

For the 1080: 10 Gbps × 256 bits / 8 = 320 GB/s
For the 1050: 7 Gbps × 128 bits / 8 = 112 GB/s
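That shortcut is easy to wrap in a small helper; this sketch reuses the per-pin rates and bus widths quoted above for the three cards:

```python
def bandwidth_gb_per_s(pin_gbps, bus_width_bits):
    """Aggregate memory bandwidth in GB/s from the per-pin data
    rate (Gbps) and the total memory interface width (bits)."""
    return pin_gbps * bus_width_bits / 8

# Figures from the answer
print(bandwidth_gb_per_s(8, 256))   # GTX 1070 -> 256.0
print(bandwidth_gb_per_s(10, 256))  # GTX 1080 -> 320.0
print(bandwidth_gb_per_s(7, 128))   # GTX 1050 -> 112.0
```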


If two devices have the same Gbps figure but different GB/s figures, that tells you they have a different number of chips in the memory bank. Typically you'd want to choose the higher aggregate memory bandwidth (GB/s), as that is generally the actual useful memory bandwidth.

A device with 10 Gbps per pin but only 4 chips would have a total bandwidth of 160 GB/s ((10 × 32 × 4) divided by 8), which is lower than the 8 Gbps across 8 chips (256 GB/s) shown above for the 1070.
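The comparison in that paragraph can be expressed as a quick sanity check, assuming 32 data lines per chip as in the GDDR5 figures used throughout the answer:

```python
def total_bandwidth(pin_gbps, chips, bits_per_chip=32):
    """Total memory bandwidth in GB/s for a bank of GDDR5-style
    chips, each with a 32-bit interface by default."""
    return pin_gbps * bits_per_chip * chips / 8

faster_pins_fewer_chips = total_bandwidth(10, 4)  # 160.0 GB/s
slower_pins_more_chips = total_bandwidth(8, 8)    # 256.0 GB/s

# The wider memory bank wins despite the slower per-pin rate
print(faster_pins_fewer_chips < slower_pins_more_chips)  # True
```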

Mokubai

Posted 2017-03-07T15:20:42.057

Reputation: 64 434

Thank you for answering. Good, easily understandable explanation with important details. This helped me a lot :) – BassGuitarPanda – 2017-03-07T16:04:34.883

@BassGuitarPanda you are very welcome. I admit I was a little baffled to begin with as well. They had two seemingly contradictory values for memory bandwidth which only made sense once I realised that one was a bandwidth-per-data-line figure. I learnt something myself as well, so thank you for a clear and well-asked question. – Mokubai – 2017-03-07T16:23:24.640