
I'm unable to find this critical piece of information in spec sheets. Appreciate any insight.

We're purchasing servers for HPC work with Intel Xeon Gold 6134 (Skylake) CPUs. I want maximum memory bandwidth and am not concerned about the total amount of memory available.

Skylake CPUs have 6 memory channels per processor, and the Lenovo server has 12 DIMM slots per processor, which means 2 DIMMs per channel. Lenovo's documentation claims that populating all 12 DIMMs gives the best bandwidth, but I'm unclear about the penalty of sharing a channel between two DIMMs.
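As a rough sanity check, the theoretical peak bandwidth per socket depends only on the channel count and transfer rate, not on DIMMs per channel (this sketch assumes DDR4-2666, the top speed the Gold 6134 supports, and ignores real-world effects like rank interleaving and bus loading):

```python
# Theoretical peak memory bandwidth per socket (assumes DDR4-2666)
channels = 6            # memory channels per Skylake-SP processor
bus_width_bytes = 8     # each channel has a 64-bit data bus
transfer_rate = 2666e6  # transfers per second for DDR4-2666

peak_gb_s = channels * bus_width_bytes * transfer_rate / 1e9
print(f"Peak per socket: {peak_gb_s:.0f} GB/s")  # ~128 GB/s
```

The interesting question is how far below this theoretical figure each population option lands in practice.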

My question is:

Should I populate 6 DIMMs (one per channel) per processor, or 12 DIMMs (two per channel) per processor, for maximum memory bandwidth?

P.S.: I already found an anecdotal answer here, but the author himself describes it as speculative.

1 Answer

Answering my own question.

Source: http://frankdenneman.nl/2015/02/25/memory-deep-dive-ddr4/

It appears that with DDR4 on Haswell cores (per the performance comparison shown in the linked article):

1 DIMM per channel achieved 99 GB/s

2 DIMMs per channel achieved 82 GB/s

3 DIMMs per channel achieved 71 GB/s

That suggests that populating more DIMMs than there are channels causes a hit in memory bandwidth.
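For anyone who wants to check the effect on their own hardware rather than rely on vendor figures, a STREAM-style "triad" kernel is the usual way to measure sustained bandwidth. Here is a minimal single-threaded sketch using NumPy; it will understate the socket's peak (the real STREAM benchmark uses all cores via OpenMP), but the relative difference between 1-DPC and 2-DPC configurations should still show up:

```python
import time
import numpy as np

N = 1 << 25  # ~33M doubles -> ~268 MB per array, well beyond cache sizes
b = np.ones(N)
c = np.full(N, 2.0)
b + 3.0 * c  # warm-up pass so allocation cost isn't timed

t0 = time.perf_counter()
a = b + 3.0 * c          # STREAM-style "triad": 2 reads + 1 write per element
secs = time.perf_counter() - t0

bytes_moved = 3 * N * 8  # three arrays of 8-byte doubles touched
print(f"Triad bandwidth: {bytes_moved / secs / 1e9:.1f} GB/s")
```

Run the same script on a machine populated each way (with NUMA pinning, e.g. `numactl --membind`) to compare like for like.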

  • FYI for others who happen upon this answer: the metrics shown in this answer are for RDIMMs, not LRDIMMs, as shown in the same linked article. LRDIMMs reduce the bus load, so more per channel can still achieve favorable data rates: LRDIMMs are 1 per channel = 97 GB/s, 2 per channel = 97 GB/s, 3 per channel = 71 GB/s – KJ7LNW Jul 07 '22 at 20:41