7

Recently, our server admin told me that the new servers we'd ordered with 140GB of RAM had "too much" RAM, and that servers started to suffer with more than about 80GB, since that was "the optimal amount". Was he blowing smoke, or is there really a performance problem with RAM beyond a certain level? I could see the argument - more for the OS to manage, etc. - but is that legitimate, or will the extra breathing room more than make up for the management overhead?

I'm not asking "Will I use it all" (it's a SQL Server cluster with dozens of instances, so I suspect I will, but that's not relevant to my question), but just whether too much can cause problems. I'd always assumed that more is better, but maybe there's a limit to that.

SqlRyan
  • 906
  • 5
  • 13
  • 22
  • I almost hesitate to say this, but could you ask & relay what his reasons are? It seems like so much smoke that I find myself curious what made him decide on that specific value. – Twirrim Mar 26 '11 at 01:06
  • Perhaps he meant that the system will run out of other resources before the RAM runs out. Maybe other servers only see 80GB utilized max, so he is simply being money-conscious. – spuder Dec 26 '14 at 18:51

7 Answers

16

There are a few thresholds out there for 'too much', though they're special cases.

In 32-bit land, PAE is what allows you to access memory above the 4GB line. The classic limit for 32-bit machines with PAE is 64GB of RAM, which reflects the extra 4 bits of physical addressing the original PAE implementation provided. 64GB is still less than 80GB.

From there we get processor-specific issues. 64-bit processors currently use between 40 and 48 bits internally for addressing memory, which gives a maximum memory limit of between 1TB and 256TB. Both are way more than 80GB.

Unless he has some clear reasons for why SQL Server can't handle that much memory, the base OS and hardware can do so without breaking a sweat.
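
If you want to sanity-check what a given box actually supports, the CPU reports its physical and virtual address widths via CPUID leaf 0x80000008 (the comments below touch on this too). Here's a minimal sketch in C, assuming x86/x86-64 with GCC or Clang and their <cpuid.h> helper; it just prints the widths and the RAM ceiling they imply:

```c
/* Minimal sketch: query the CPU's supported physical/virtual address
 * widths via CPUID leaf 0x80000008 (x86/x86-64, GCC/Clang <cpuid.h>).
 * The physical width bounds how much RAM the processor can address,
 * e.g. 40 bits -> 1TB, 48 bits -> 256TB. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
        return 1;
    }

    unsigned int phys_bits = eax & 0xFF;        /* EAX[7:0]  = physical */
    unsigned int virt_bits = (eax >> 8) & 0xFF; /* EAX[15:8] = virtual  */

    printf("Physical address bits: %u (up to %llu GB of RAM)\n",
           phys_bits, (unsigned long long)((1ULL << phys_bits) >> 30));
    printf("Virtual address bits:  %u\n", virt_bits);
    return 0;
}
```

On typical server CPUs you'll see something in the 40-46 bit range for the physical width, so 80GB (or 140GB) isn't anywhere near the hardware ceiling.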

sysadmin1138
  • 131,083
  • 18
  • 173
  • 296
  • I was going to say: the more someone knows about the subject, the more complicated the answer needs to be. There are definitely going to be special cases where this could be true. My first thought about the question was that the sysadmin took an assembler class that illustrated how more cache can result in more cache misses (given certain specifics) - but this common teaching is typically misinterpreted. – Leo Nov 01 '10 at 17:13
  • Addressability isn't the only concern though, is it? I'll give an example: https://ark.intel.com/content/www/us/en/ark/products/64596/intel-xeon-processor-e5-2690-20m-cache-2-90-ghz-8-00-gt-s-intel-qpi.html suggests a maximum memory bandwidth below 64GB/s, so a single-socket server with > 128GB, if I'm reading it right, cannot read through all of its RAM within 1s. So using 256GB of RAM might be sub-optimal (I actually don't know, but I feel that was what the OP's SA meant) – MrMesees Jan 04 '20 at 06:31
  • PAE isn't actually limited to 36-bit physical address. That was just the first-gen implementation in PPro. x86-64 [used the same format as PAE for its page-table entries](https://stackoverflow.com/questions/46509152/why-in-x86-64-the-virtual-address-are-4-bits-shorter-than-physical-48-bits-vs), and modern HW supports more physical bits. These do still work in 32-bit mode with PAE, just like in 64-bit mode. – Peter Cordes Sep 21 '21 at 15:39
  • See [Brendan's answer here](https://stackoverflow.com/questions/57858426/is-it-possible-that-a-32-bit-operating-system-use-more-than-4-gb-memory-and-how/57861352#57861352); he's a frequent contributor to asm / osdev questions with practical experience, I'm pretty confident about taking his word on this. And like I said, it makes perfect sense from a hardware-design perspective. See [How to get physical and virtual address bits with C/C++ by CPUID command](https://stackoverflow.com/q/64513535) for how to query the actual supported phys addr size – Peter Cordes Sep 21 '21 at 15:48
  • Although of course using even 64GiB of RAM with a 32-bit kernel with a 3:1 split (only 1GiB of lowmem) is pretty terrible. [Confusion about different meanings of "HighMem" in Linux Kernel](https://stackoverflow.com/q/68091247). For example problems like https://access.redhat.com/discussions/671383 / [Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM](https://serverfault.com/q/217219). So always use a 64-bit kernel, even if you use 32-bit user-space. – Peter Cordes Sep 21 '21 at 15:56
10

He was blowing smoke. If he'd said 4GB and you were using 32-bit operating systems then he might have had half an argument, but no, 80GB is just a number he's pulled out of the air.

Obviously there are some problems if memory isn't 'bought wisely'. For instance, larger DIMMs usually cost more than twice the price of the half-size versions (i.e. 16GB DIMMs cost more than twice as much as 8GB DIMMs), and you can slow a machine down quite a way by not using the right number/size/layout of memory, but it'll still be very fast. Also, of course, the more memory you have, the more there is to break, but I'm sure you'll be happy with that system for what you're asking of it.

Chopper3
  • 100,240
  • 9
  • 106
  • 238
7

I apologize, but most of these answers are incorrect. There is, in fact, a point at which more RAM will run slower. For HP servers with 18 slots, like the G7, filling all 18 of the slots will cause the memory to run at 800 instead of 1333. See here for some specs:

http://www8.hp.com/h20195/v2/GetHTML.aspx?docname=c04286665

(Click on Memory, of course.)

A typical memory config with 12 slots filled will be 48GB (all 4GB DIMMs), 72GB (a mix of 8GB and 4GB), 96GB (all 8GB), etc. When you say "140GB" I assume you really mean 144GB, which would very likely be 8GB DIMMs in all 18 slots. That would in fact slow you down.
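
To put rough numbers on that slowdown, here's a back-of-the-envelope sketch (assumptions: DDR3 and a standard 64-bit memory channel; the exact figures depend on the platform and channel count) of the theoretical peak per-channel bandwidth at the two speeds:

```c
/* Back-of-the-envelope sketch: theoretical peak per-channel bandwidth
 * for DDR3 at 1333 MT/s vs 800 MT/s, assuming a 64-bit (8-byte) channel.
 * Dropping to 800 when every slot is populated costs roughly 40%. */
#include <stdio.h>

int main(void)
{
    const double bytes_per_transfer = 8.0;        /* 64-bit channel     */
    const double rates_mts[] = { 1333.0, 800.0 }; /* mega-transfers/sec */

    for (int i = 0; i < 2; i++) {
        double gb_per_s = rates_mts[i] * 1e6 * bytes_per_transfer / 1e9;
        printf("DDR3-%4.0f: %.1f GB/s peak per channel\n",
               rates_mts[i], gb_per_s);
    }
    return 0;
}
```

That works out to roughly 10.7 GB/s vs 6.4 GB/s per channel, about a 40% drop in peak bandwidth.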

Now, from what research I have done, it appears the slower memory speed doesn't affect a lot of applications, but one thing it is known to affect is database apps. In this case you say it's for a SQL cluster, so yes, for that, too much RAM could slow you down.

It's possible the server admin you talked to knew this from practical experience without knowing the exact technical reason.

Hope that helps,

-Jody-

Jody M
  • 71
  • 1
  • 1
  • 1
    This answer still isn't quite correct either, as this doesn't apply to all systems. Some systems can have all their slots filled with RAM and run them at full speed. – austinian Dec 26 '14 at 19:49
4

Take it to an extreme and say you have petabytes of memory: the system (CPU) is not going to work harder to manage the memory mappings. The OS should be smart enough to consume this memory for disk caching and still have plenty left to manage application space (memory and code). Mapping memory in RAM vs. virtual space consumes the same number of CPU cycles.

Ultimately the only thing to suffer with too much memory is wasted energy.

Leo
  • 1,008
  • 1
  • 8
  • 13
  • This was my suspicion, but I wanted to see if anybody else thought there was something to what he said. Thanks for your input. – SqlRyan Nov 01 '10 at 16:53
  • 1
    A RAM stick can consume around 6 to 20W of energy. Even with 20 sticks at the upper bound, that's 400W. Hardly a game-changer when a single CPU can pull over 120W. – Hubert Kario Nov 01 '10 at 19:44
  • @HubertKario: Many CPUs don't spend most of their time at max TDP. And even if that system with 20 sticks of RAM was a quad-socket with 4x 100W = 400W, say, half the power consumption would be coming from RAM. (Although I think 20W per DIMM is probably too high an estimate, at least these days.) Depending on the cooling setup, it's even possible that too much heat from RAM is limiting the CPUs' ability to turbo as much / as long as they otherwise would. – Peter Cordes Sep 21 '21 at 16:03
3

I found at least one scenario where you can have too much RAM. Admittedly, this is a software limitation, not a hardware limitation.

Java applications (like Elasticsearch) suffer when given a heap larger than about 32GB, because the JVM can no longer use compressed object pointers (compressed oops).
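
For what it's worth, the ~32GB figure falls straight out of the arithmetic: a compressed reference is a 32-bit value scaled by the JVM's object alignment, which defaults to 8 bytes in HotSpot. A tiny sketch of that calculation (the alignment is a HotSpot default and can be tuned, so treat the exact ceiling as approximate):

```c
/* Sketch of the ~32GB compressed-oops ceiling: HotSpot compressed object
 * references are 32-bit values scaled by the (default) 8-byte object
 * alignment, so they can address at most 2^32 * 8 bytes of heap. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t max_refs  = 1ULL << 32; /* 32-bit compressed reference      */
    uint64_t alignment = 8;          /* default HotSpot object alignment */
    uint64_t limit     = max_refs * alignment;

    printf("Compressed-oops addressable heap: %llu GiB\n",
           (unsigned long long)(limit >> 30)); /* prints 32 */
    return 0;
}
```

Above that, the JVM falls back to full 64-bit pointers, so a heap just over the limit can end up holding less live data than one just under it.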

Additional Information:

http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html

spuder
  • 1,695
  • 2
  • 25
  • 42
2

Assuming the CPU can actually use the RAM (i.e., you're not past one of the special thresholds that sysadmin1138 mentioned), more RAM can't possibly hurt performance.

However, since you have a limited budget, there may indeed be some "optimal" amount of RAM -- if you spend more money on RAM, then you have less money for CPU(s) and hard drive(s) and IO. If something other than RAM is the bottleneck, then adding more RAM doesn't help performance (although it doesn't hurt performance, either), and it costs money that could instead be applied to opening up the bottleneck.

(I'm neglecting the cost of electricity to power the servers and the cost of electricity to cool them; those costs can have a big effect on "optimizing" hardware selection in a data center.)

David Cary
  • 398
  • 3
  • 16
1

Just throwing this small piece of information into the loop:

Startup/bootup time is always affected by extra RAM - it has to be counted, and on one of my servers a full reboot or startup takes 30 minutes even with fast boot on, just to count the extra RAM (384GB on this server), and this is before bootup even starts. Hopefully you will not have to reboot your server often, but I figured I would mention this since no one else did.
I agree with all of the above in general; in most situations more is better, with the exceptions already noted.

Final thought: always remember the quote, attributed to Bill Gates, about never needing more than 640K of memory.

Skooter
  • 11
  • 1
  • 1
    Most modern servers will only do a full, time-consuming memory test on boot if you've either not configured fast boot in your BIOS, the server's been physically powered off, or you've changed something about the memory loadout, such as adding/removing/moving DIMMs. Worst for me was the HPE G7 servers, they took SO long, but with their Gen8+ things got faster, and now ESXi supports partial reboot via the 'fast boot' option, which is very quick. – Chopper3 Feb 20 '20 at 12:39