Recently, our server admin told me that the new servers we'd ordered with 140GB of RAM had "too much" RAM, and that servers start to suffer with more than about 80GB, since that was "the optimal amount". Was he blowing smoke, or is there really a performance problem past a certain amount of RAM? I can see the argument - more for the OS to manage, etc. - but is that legitimate, or does the extra breathing room more than make up for the management overhead?
I'm not asking "Will I use it all?" (it's a SQL Server cluster with dozens of instances, so I suspect I will, but that's not relevant to my question), just whether too much RAM can itself cause problems. I'd always assumed more is better, but maybe there's a limit to that.