4

We have a medium-sized SQL Server database of around 40 GB, and we are about to move it to a new box running Windows Server 2008 R2 x64.

An expensive contractor has just come in and said we need 72 GB of RAM so that the whole DB can live in RAM, I guess. I find that ludicrous, but maybe I am missing something.

I was wondering if there is a 'rule of thumb' calculation that hints at a good amount of RAM to use for a SQL Server instance.

EDIT: The DB has up to 150 people hitting it at any one time and is used to run a booking system. It is a very nasty old DB schema: hideous numbers of columns in a lot of the tables and vast arrays of semi-pointless indexes. It is accessed by .NET hitting some sprocs, but also by raw SQL queries and by whatever VisualDataFlex uses.

thanks

nat

  • Not enough information. The solution depends on many things: required performance, data access modes ... What is this DB used for? – mmv-ru Sep 14 '12 at 11:42

1 Answer

9

There is no good rule of thumb.

The usefulness of additional RAM depends very much on how your database is designed and on the query patterns of your users. As an extreme example, if you have 1 TB of data in total, good indexing and 1,000 simultaneous users that only ever go after one particular row (select * from dbo.BigTable where pkey_id = 42), you could get by with very little RAM. The impact of more RAM also depends on how fast your disk storage subsystem is. If you have super-fast storage (a good SAN, good DASD or SSDs) you may not notice much lag when the server needs to read data from disk.

Ideally, you want all of the data that the database is likely to need during the day cached in RAM. This is sometimes referred to as "hot data" or "hot pages" (in SQL Server, data is organized in 8 KB units called "pages"). An example of hot data might be orders taken today or yesterday that will be needed by workers shipping those orders. An example of cold data might be orders taken two years ago, but that still exist in the system so that CSRs can look up old orders.
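If you want a rough, empirical feel for how much of the database is actually being kept hot, you can ask the buffer pool directly. This is only a sketch using a standard DMV (each row in sys.dm_os_buffer_descriptors represents one 8 KB page currently in memory); run it during a busy period:

    -- Approximate how much of each database is currently cached in the buffer pool.
    -- One row per 8 KB page in memory, so COUNT(*) * 8 / 1024 gives MB.
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024  AS cached_mb
    FROM   sys.dm_os_buffer_descriptors
    GROUP  BY database_id
    ORDER  BY cached_mb DESC;

If the figure for your database levels off well below 40 GB even under load, the hot portion is probably a good deal smaller than the total.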

With a well-designed OLTP system holding 40 GB of data in total (as opposed to data files that merely add up to 40 GB), it is quite possible that the hot data only comes to 10 GB, 5 GB, 1 GB or even less. In olden days, the price difference between buying 64 GB of RAM and 8 GB (or even less) was astronomical, and getting the amount of RAM to be purchased right was worth spending a lot of time on.
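Another hedged way to judge whether your current RAM already covers the hot data is the Page Life Expectancy counter, which tells you how long (in seconds) an average page survives in the buffer pool before being evicted; persistently low values hint at memory pressure. A quick sketch against the standard performance-counter DMV:

    -- Page Life Expectancy: seconds an average page stays in the buffer pool.
    -- Sustained low values (a few hundred seconds or less) suggest the hot
    -- data does not fit in the RAM you already have.
    SELECT [object_name], counter_name, cntr_value AS seconds
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page life expectancy'
      AND  [object_name] LIKE '%Buffer Manager%';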

You always want a little extra RAM for "overhead" like the OS, virus scanners and RDP sessions. You also want to take database growth into account.
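If you do buy a big box, common practice is to cap SQL Server's memory so that the OS and everything else keep some headroom. A sketch using sp_configure; the 56 GB figure is purely an illustrative number for a 64 GB server, not a recommendation:

    -- Leave headroom for the OS and other processes by capping SQL Server's memory.
    -- 57344 MB (56 GB) is just an example value for a 64 GB box; adjust to your server.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 57344;
    RECONFIGURE;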

Another thing to keep in mind is that RAM comes in "chunks". You can't decide between 48 GB and 49 GB; you have to step up, probably from 48 to 64 GB. The size of the chunk depends on the RAM technologies currently being marketed and how many memory channels your server has. Old servers had one channel to memory, then servers started having two, and now most beefy servers have three memory channels. So, you can't just add one stick, you need to add two or three at a time.

If you have a poorly designed and badly indexed database, as you say you do, SQL Server will wind up reading lots of data that it would not have to read with a better-designed database. Having that data all in RAM doesn't mean that it doesn't have to read through it all; it just means that it reads through all of that data faster than it could if the data were sitting on disk. That isn't necessarily going to fix all of your problems: reading data from RAM is fast but still takes a finite time, and you can still get blocking and deadlock issues which can cause performance problems.
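If you want to see where those reads are actually going before (or instead of) buying RAM, the plan cache can show which queries churn through the most pages. A rough sketch using standard DMVs; the 200-character snippet length is just for readability:

    -- Top cached queries by logical reads (pages touched, whether from RAM or disk).
    -- Heavy readers here are usually the queries hurt most by a badly indexed schema.
    SELECT TOP (10)
           qs.total_logical_reads,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS query_text_snippet
    FROM   sys.dm_exec_query_stats AS qs
    CROSS  APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER  BY qs.total_logical_reads DESC;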

Another thing is that more RAM helps when people run large reports on the system that utilize last year's data. People treat OLTP systems like reporting systems all the time, and this may not be a rare occurrence in your shop, particularly with haphazard raw SQL queries coming in.

You could spend time (and money) on developers and testers to improve the database design and make whatever changes to the front end that this would require.

A very quick trip to hp.com shows that I can get a (very) stripped-down server with 96 GB of RAM for about $9,000 US.

(That's less than $100 US per GB, without factoring out any of the other parts in the server, parts like the power supply or the processor. Old people like me remember when RAM was $5,000 US per megabyte. With an M.)

How fast can you spend $9,000 US on development and testing time? (At corporate rates in my area, that would barely get you one whole person for one whole month.) Will that be enough time to fix everything in the database? What if a bug in the changed code slips through?

A server outage to add RAM might take an hour, and nearly anyone can do the work. Migrating onto a totally new server might take a couple of days of prep work and an hour of down time, and you will need someone with experience (a DBA, probably).

Adding RAM is unlikely to cause any bugs. Migrating to a new server is also unlikely to cause any hard-to-find problems. (Generally, stuff either works or it is obviously broken. It's usually "oops, wrong password" or "oops, forgot to copy a linked server", never "hey, this calculation suddenly doesn't work correctly".)

So, assuming that we are going to keep everything in RAM and we have a 40 GB database, we want some room for overhead. I'd be thinking of a server with 48 GB or so. Plus, we want some room for growth; let's just say 25% growth from here, which is 12 GB and takes us up to 60 GB. The next level up (which I can actually buy) is 64 GB. If your consultant is recommending a triple-channel server, the next level up might be 72 GB. So, his suggestion isn't necessarily outlandish. Maybe you don't need as many cores/sockets, and that might drop you back down to a dual-channel server (which would be cheaper anyway), and then you could buy a little bit less RAM.

TLDR:

In short, which I rarely am, RAM is cheap and time is expensive. If you don't have the money, time or inclination to fix the database or you don't want to risk adding bugs to your application, throwing RAM at the problem is hard to question. The other good move is to improve the responsiveness of the disk storage system (IOW, add more spindles, more RPM or go with SSD).

I would be asking why the consultant didn't want you to spend a year and a $100,000 US for him to rewrite the system.

Darin Strait