
A system we are developing consists of a Web app frontend and a backend that does a lot of data processing using stored procedures in SQL Server 2008 R2 (please, don't ask why...). These stored procedures make heavy use of temp tables (creation, insertions, joins), so tempdb sees a high I/O rate in both reads and writes. Our clients need speed, so we are about to recommend the following:

  • Buy a server with a RAID 1 SSD array for the main database (maybe RAID 10 if they have the money) and 64 GB of RAM, with a separate hard drive for the OS and the SQL Server installation, so that vital data is stored redundantly on a fast drive.
  • Use a ramdisk for the tempdb database, so temp tables (the biggest performance bottleneck, we think) are processed in RAM.
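For reference, pointing tempdb at a ramdisk is just a file relocation. A minimal T-SQL sketch, assuming the ramdisk software exposes the volume as drive R: (a hypothetical letter) and tempdb still has its default logical file names:

```sql
-- Sketch: relocate tempdb's files to a ramdisk volume (R:\ is an assumed
-- drive letter). The change takes effect at the next SQL Server restart;
-- tempdb is recreated from scratch on every startup anyway, so nothing
-- needs to be copied.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'R:\templog.ldf');
```

One operational caveat: if the ramdisk is not mounted when the SQL Server service starts, the service will fail to start, so the boot ordering of the ramdisk driver matters.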

Some context data:

  • Our database uses no more than 10 GB, with a very low expected growth rate. Tempdb usually grows to no more than 2-3 GB.
  • The server will be used for the DB and the Web Server.
  • The ramdisk software can mount the ramdisk at Windows startup.

We have tested the ramdisk approach on a laptop with plenty of RAM. The speedup is remarkable: stored procedure execution times were cut to at least one third.

I need help determining whether this is a good solution, and detecting any flaws (obvious or less obvious) that I might be missing.

EDIT: Thanks for the answers so far! I forgot to mention explicitly that there will be concurrent users of the application, so multiple temp table operations will be running at once. Also, mixing the web server and the DB server is not our choice; we already know it's not optimal ;)

mramirez

4 Answers


It's not just the rate, it's the wait. Benchmark properly: check the IOPS, plus the disk queue length, using Perfmon and SQL profiling. Go ahead - I'll wait.
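Alongside Perfmon, SQL Server's own per-file statistics can show where the waits actually are. A sketch using the `sys.dm_io_virtual_file_stats` DMV (available on 2008 R2); only system views are assumed:

```sql
-- Sketch: average read/write stall per database file, from SQL Server's
-- accumulated I/O counters since the last service restart. If tempdb
-- (database_id = 2) tops this list, the "tempdb on fast storage" idea
-- has data behind it.
SELECT  DB_NAME(vfs.database_id)                             AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;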

You already know that, if you do have actual performance concerns, the OS should be on one set of spindles, the MDFs on another, the LDFs on another, and the tempdb files on yet another. If you can't commit to doing that, benchmark it and find out your priorities. The different read and write patterns can also dictate different RAID levels for each of those.

You may find that standard disks with the right RAID configs can get you where you need to be, without plunking down for enterprise SSDs. Although, if tempdb is getting hammered enough, a single SSD might be a good fit for it: probably no need for RAID for performance, although for redundancy it might be a good idea. That depends on your budget and how long you can afford to be down, of course.

You also know that the SQL Server should be separate from the web server, right? If performance is a concern? Even if you're not having a problem now, once you grow you'll have a difficult time determining which one is being hammered harder and what the appropriate fix is.

mfinni
    I agree with 99% of what you're saying, although I would tend to build one big RAID 10 over a lot of little LUNs. All those little LUNs are great at separating workloads, but at the same time they become a bottleneck in themselves. You could have your logs getting hammered while your DB drives are doing nothing. Anyway, that's MHO... – Eric C. Singer Apr 16 '13 at 18:14
    Sometimes I wonder about the mantra of separating the web/app tier from the SQL server. Given the large memory models and fast CPUs we can throw at these boxes nowadays, in my case we see network issues before we see memory bottlenecks. Nothing like the bandwidth of memory versus networks... – tony roth Apr 16 '13 at 18:14
  • Eric - that's certainly a valid approach for many workloads. – mfinni Apr 16 '13 at 20:03
  • Tony - it's often more about scaling. If you have a website/app that's not a heavy-hitter to the DB, but you start getting a *ton* of traffic to the web and need more threads/CPU/RAM just for Apache/IIS, how do you do that if you've got an all-in-one server? How do you add more resources only at certain times and take them away at others? How do you add a DB caching layer? Etc. For a simple LOB app or internal site, those might not be problems. – mfinni Apr 16 '13 at 20:06
    YMMV is quite true in this case; I just laugh when I hear that it's an absolute requirement to separate the SQL server from the application. – tony roth Apr 16 '13 at 20:23
  • Certainly not an absolute requirement. But given what the question said, I think "should" was probably warranted. – mfinni Apr 16 '13 at 20:32
    Nothing stated in the question would sway me to think that a local install would not be best. With OSes prior to W2K8R2, yes, these issues would occur quite frequently. But I have motherboards with the correct CPU/RAM ratios that support more than half a TB of RAM, with SQL servers using every inch of that; if they were on the other side of a 10-gig pipe they'd be dead, and even multiple 10-gig pipes wouldn't be good enough. It's true I prefer not to use SQL for these types of DBs anymore, but sometimes that's the requirement. – tony roth Apr 16 '13 at 20:53

RAID is for redundancy; performance takes a back seat. In RAID 5, for example, a small write means reading the old data block and the old parity, computing the new parity, and then writing both back (the read-modify-write penalty), clearly slower than just writing. And since the head movements of the component disks won't necessarily be synchronized, you are waiting for the slowest of the set, not just the average.

Yes, a good RAID implementation and a smart operating system can mitigate this a lot (they must; even single disks are horribly slow with respect to RAM, so any operating system worth its salt does extensive caching irrespective of the disks).

Yes, a smart DBMS will also cache data in RAM as much as possible (respecting the promises it has made about data consistency, failure resistance and so on; where required, it will explicitly wait for data to be safely on disk before going on).

For any database a RAM disk is pure poison: the guarantee "data explicitly written to disk, thus safe" no longer holds.

vonbrand
    Guess you've not heard about wide striped arrays, or even RAID 0 or RAID 10? – mfinni Apr 16 '13 at 20:09
  • Microsoft accepts the use of a storage solution such as a ramdisk for tempdb as a viable practice, provided the storage solution complies with the SQL Server I/O requirements; not so for any other database: "*All I/O subsystems should provide full compliance with the SQL Server I/O requirements. However, it is obvious that a RAM disk is not durable media. Therefore, an implementation such as a RAM disk may only be used as the location of the tempdb database and cannot be used for any other database*". [Here's the link](http://support.microsoft.com/kb/917047/en-us) – mramirez Apr 17 '13 at 17:42

Thanks for all the answers; they have been very helpful. After some further research, I found that I/O speed was not the main bottleneck in this particular case, although it is important in general. Best practices in tempdb management include having at least 4 data files, and Microsoft also recommends 1 data file for each CPU core. Having more files helps reduce some flavors of allocation contention.
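As a sketch of that practice (the sizes and the T:\ path are illustrative assumptions, not a recommendation), splitting tempdb into four equally sized data files looks like this; equal sizing matters because SQL Server's proportional-fill algorithm only spreads allocations evenly across files of equal size:

```sql
-- Sketch: grow tempdb to four equally sized data files
-- (paths and sizes are illustrative only).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB);
```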

Some links about this:

mramirez

"so that tempdb i/o rate is high in writes and reads"

Too little RAM. tempdb only generates I/O when it overflows memory; otherwise SQL Server does not flush the tempdb pages to disc.

So a RAM disc will not help; rather, put in more memory.

TomTom
    I'm not sure that's exactly correct, although the "more RAM" is almost always the correct answer with SQL server. http://support.microsoft.com/kb/917047 "Temporary tables are like all other tables in a database. They are cached by the buffer pool and handled by lazy write operations" – mfinni Apr 16 '13 at 20:22
    Actually, we tested our system on a server with 128 GB of memory, exclusively dedicated to this test, with max memory set as high as possible. Our DB size is less than 10 GB, as I mentioned. Even with that configuration, tempdb was the bottleneck. Temp tables are always persisted to disk. – mramirez Apr 16 '13 at 21:10