
A question for those who know about HP server performance and SQL...

I am trying to spec a suitable server that has the following requirements:

  1. Run SQL 2012 Standard Edition (I cannot get approval for Enterprise edition)
  2. Main database is currently 600 GB; allow for growth up to 1,200 GB over the next 3 years
  3. Log files for the main database are currently 120 GB; allow for growth to 300 GB
  4. OLAP database is 60 GB, allow for growth up to 120 GB
  5. Reporting Services is about 5 to 10 GB

Windows Server 2012 Standard Edition supports more than 32 GB (yay!) and SQL 2012 Standard Edition allows 64 GB of RAM usage, so I was thinking that a server with 96 GB of RAM would be sufficient for the OS, SQL, OLAP, and RS...
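A rough sketch of how I'd budget that 96 GB (the OS/OLAP/RS figures are assumptions on my part; only the 64 GB cap comes from the Standard Edition limit):

```python
# Rough RAM budget for the proposed 96 GB server. SQL Server 2012
# Standard caps the buffer pool at 64 GB per instance; the remainder
# has to cover the OS, Analysis Services, and Reporting Services.
total_ram_gb = 96
budget_gb = {
    "SQL buffer pool (Standard Edition cap)": 64,
    "OS reserve (assumed)": 8,
    "Analysis Services / OLAP (assumed)": 16,
    "Reporting Services (assumed)": 4,
}
allocated = sum(budget_gb.values())
for name, gb in budget_gb.items():
    print(f"{name}: {gb} GB")
print(f"Allocated {allocated} GB of {total_ram_gb} GB "
      f"({total_ram_gb - allocated} GB headroom)")
```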

Now my concern is the disk space requirement. I would like to put each component on its own RAID 1 or RAID 1+0 volume, so by that measure I would need:

  1. 2 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1 for boot/OS (300 GB usable)
  2. 8 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1+0 for main database (1,200 GB usable)
  3. 2 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1 for main DB log files (300 GB usable)
  4. 2 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1 for OLAP
  5. 2 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1 for Reporting Services
  6. 2 x 300GB 15K RPM 6Gb DP SAS disks in RAID 1 for TempDB

OK, I know that Reporting Services and TempDB do not need 300 GB each, but I think there is something to be said for keeping all the disks the same size.
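To sanity-check the layout, the usable capacities work out like this (RAID 1 and RAID 1+0 both halve raw capacity):

```python
# Usable capacity per volume in the proposed all-15K-SAS layout.
# RAID 1 mirrors a pair; RAID 1+0 mirrors then stripes -- both
# halve the raw capacity.
DISK_GB = 300

def usable_gb(disks, raid):
    assert raid in ("1", "1+0") and disks % 2 == 0
    return disks // 2 * DISK_GB

layout = {
    "Boot/OS":            (2, "1"),
    "Main database":      (8, "1+0"),
    "Main DB logs":       (2, "1"),
    "OLAP":               (2, "1"),
    "Reporting Services": (2, "1"),
    "TempDB":             (2, "1"),
}
for name, (disks, raid) in layout.items():
    print(f"{name}: {disks} disks, RAID {raid} -> "
          f"{usable_gb(disks, raid)} GB usable")

total_disks = sum(d for d, _ in layout.values())
print(f"Total disks: {total_disks}")  # 18, hence the external enclosure
```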

Since the HP DL380p Gen8 only has 8 disk slots, I would need an external SAS drive enclosure such as the D2700 to hold the remaining 10 disks.

I am not concerned about having too much disk space.

My main goal is to get the maximum performance out of this server.

So my primary question is: Is the IO throughput between the DL380 and the D2700 sufficient? It appears it only uses a single mini-SAS cable to connect the external drive enclosure to the server.

My secondary question is: does the overall server spec conform to good/best practice?

I have been given a ballpark budget limit of $60K, which I think is insufficient, so I may be able to ask for a bit more if I can justify it.

Many thanks in advance!

ChrisNZ

4 Answers


I don't know about HP, but the disk layout smells "SLOW" to me.

Seriously.

First: a separate RAID 1 for everything means that when a component is not in use, its IOPS are wasted. I would go with one RAID 10.

Second, you use expensive 15k RPM disks and, wow, would get about 100 times the IO performance, for a lower price, with SATA-based... SSDs.

So, I would personally make sure you don't go on a hiring spree in my company ever again after this proposal. It is very state of the art... for the year 2000. We are some years later now.

With the DL380p having 8 drive bays, I would go with an ALL OUT SSD SETUP, using 480GB Samsung 843T enterprise-level SSDs in RAID 10. That is roughly 1,200 GB of usable space with 6 drives, 1,600 GB with 8, and the performance will fly around your more expensive setup. Buy 9-10 drives so a replacement is on hand, and there you go. Not sure the RAID controller will be able to handle that bandwidth ;)

TomTom
  • I understand your point, but my understanding was that SSDs were not altogether ideal for high-write scenarios. The main DB and its log will see quite a high amount of disk writes. The OLAP cube will only be rebuilt once a day, so it would make sense to put that onto an SSD mirror. – ChrisNZ Dec 05 '13 at 08:18
  • Ah, yeah. Now I suggest you check facts. The SSD I mentioned is good for 5 total overwrites per day over the assumed lifetime of 5 years. I don't know what your databases do, but mine KEEP data, so they don't go off and overwrite the whole disc 5 times in a day. And even if they did and you had to replace everything every year (!), it STILL comes out cheaper than those 15k SAS discs, considering how many you need for the same performance. Basic math - most IT people fail at the "SSD wear out" level without going through the whole logical chain. – TomTom Dec 05 '13 at 08:20
  • We load 10s of thousands of new transactions a day during trading hours, then run sustained sequential-write operations at night during the batch processing. Just did a Google search and found this video: http://technet.microsoft.com/en-US/video/Hh771099 The narrator advocates extensive partitioning, which sounds like quite a number of SSDs will be needed in our case. Also, we are an HP-only shop, so Samsung is out of the question. The HP 400GB 6G SAS SLC SFF (2.5-inch) SC Enterprise (part no 653082-B21) are $7000 USD each! – ChrisNZ Dec 05 '13 at 08:40
  • @ChrisNZ - many people complain about HP's SSD pricing. I understand this, but they have much more over-commit space than the vast majority of other SSDs. As an example, I have a bank of HP BL460c Gen8 blades with 6 x 400GB SLC SSDs in R10; they each see around 172GB/day of writes, and although each day their usable lifetime drops (it's a setting you can check via the HPACUCLI tool), they still have something like 27 years left at the current rate. TomTom is right that you would see enormous benefits using SSDs, but I would caveat that by saying if you do, don't go cheap, ok. – Chopper3 Dec 05 '13 at 08:58
  • @ChrisNZ - oh and if you really can't afford SSDs then simply create one large R10 with those disk and carve them up into logical disks, at the very least it'll mean you can assign appropriate sizes to each volume rather than what the disk size dictates. – Chopper3 Dec 05 '13 at 08:59
  • Yes. The main problem here is that by isolating the IOPS budget you waste unused parts. And an HP-only shop - I am sorry for you. Seriously. Ever since I got my hands on the new Adaptec controllers I love using a set of SSDs as read and write cache... something an "HP only shop" will likely miss for many years. And Chris - we load about half a billion trades per 24-hour period, as our database is the backend of a financial analysis cluster running pretty much nonstop. NO issues. – TomTom Dec 05 '13 at 09:43
  • These answers and comments are all very helpful, thanks! It sounds like a bunch of SSDs in RAID 10 is definitely the way to go. I'll see if I can find a way to introduce such a solution into our shop... :) – ChrisNZ Dec 05 '13 at 10:47
  • You definitely should. And try to break the "HP only" stuff - especially on SSD. The Samsung I mentioned is top of the line at the moment, outside a ridiculously priced enterprise market (so stupidly priced that you come out cheaper replacing drives with Samsungs every year). – TomTom Dec 05 '13 at 10:52
  • See my question and answers regarding the resilience of SSDs in 2013. I was sceptical of SSDs too, but now I'm sold on them. http://serverfault.com/questions/507521/are-ssd-drives-as-reliable-as-mechanical-drives-2013. Spinning media is only good as a file store, for infrequently accessed data where speed is less of an issue, or where you need huge volumes of data at a lower price. For anything requiring performance, SSDs are the way forward. – hookenz Dec 05 '13 at 20:03
  • Totally agree, TomTom. Any new database setup should be using SSDs as the storage medium wherever possible. – hookenz Dec 05 '13 at 20:05
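The endurance arithmetic from these comments can be sketched roughly as follows (the 5-drive-writes-per-day rating is the figure TomTom quotes; the 172 GB/day workload is Chopper3's example, so this is illustrative, not a spec):

```python
# Back-of-the-envelope SSD endurance: rated total write volume divided
# by the workload's daily writes. The 5 drive-writes-per-day over
# 5 years is the rating quoted in the comments for the 843T-class
# drive; the 172 GB/day figure is from Chopper3's blades.
def years_of_life(capacity_gb, dwpd, rated_years, daily_writes_gb):
    rated_total_gb = capacity_gb * dwpd * 365 * rated_years
    return rated_total_gb / daily_writes_gb / 365

# A 400 GB drive seeing ~172 GB/day lasts far beyond its rated
# 5-year window at that write rate.
print(round(years_of_life(400, 5, 5, 172), 1))
```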

I'm trying to avoid the discussion going on in the comments, so I'll throw in my ideas here...

  • Do not use one large RAID array with busy SQL servers. There are very good reasons to physically separate data, logs and tempdb on different spindles. You do not want I/O queues against the same array when doing OLTP or any other kind of transactional-heavy environment
  • SQL servers greatly benefit from SSDs, as they nearly always write/read in a sequential manner. The increased bandwidth helps tremendously. Make sure that you use dual-port SAS SSDs; do not use SSDs with SATA interfaces.
  • Memory (RAM) is still the most important factor in how fast your queries will run. The more memory, the more data and execution plans can be cached. Do not underestimate this. 64GB of RAM can quickly become a limiting factor in the future, so factor in that you might want to upgrade to SQL Server Enterprise one day. In other words, leave room for RAM upgrades.
  • Controller cache is super important for writes in sequential workloads. Read cache only helps when you either have hotspots (not very common with SQL), or when the controller is smart enough to read-ahead on the disks.
  • Having a separate array just for analysis services and reporting services seems a bit overkill. It does however depend on your specific situation. Only you can answer how much IOPS/response time you need for each component.

That being said - if you do go the SSD route then I'd suggest this as a minimum:

  • 2 x 300GB 10k in RAID1 for OS+SQL program files (No point in wasting money on SSD for this)
  • 8 x 400GB eMLC SSD in RAID10 for DB-data/OLAP/Reporting (get the SSD with the fastest READ iops/bw you can get)
  • 2 x 400GB SLC (or eMLC) SSD in RAID1 for DB-log (SLC are expensive, but very trustworthy. Use eMLC if you can't afford it)
  • 2 x 200GB SLC (or eMLC) SSD in RAID1 for TempDB (same as above)
pauska
  • To throw more fuel on the optimization fire - with today's modern SSDs it's possible for an SSD to saturate the disk controller's port (i.e. "The disk is faster than the thing it's plugged in to"). If you have a *very* high-volume environment you may want to ensure that your transaction log array and your data array are on separate controllers/ports. It would be an unlikely confluence of events, but if we're optimizing let's go all the way and make sure the high-volume I/O makers have as much headroom as possible :-) – voretaq7 Dec 05 '13 at 19:59

This is a bad design... Part superstition, part misunderstanding of how storage technologies have evolved.

But there's hope!

  • You should be looking at the 25-bay HP ProLiant DL380p Gen8 server. It accommodates 25 x 2.5" disks on a SAS expander backplane. That eliminates the need for an external D2700 storage enclosure.

  • The sweet spot for 2.5" SAS enterprise disks right now is 900GB. You can get them in 300GB, 450GB, 600GB, 900GB and 1.2TB capacities nowadays. 900GB disks are relatively cheap now.

  • HP Smart Array controllers allow you to configure multiple logical drives per group of physical disks. Something like 16 spindles of 900GB drives (in one array) could be carved into the respective logical volumes you need. That way, you get the volume isolation you need, but the collective I/O capabilities of 16 or more disks.

  • HP controllers have the ability to leverage SSDs as read cache to back a drive array.

  • Going with all spinning disks at this point today is old-school and won't be the most effective use of resources.

  • You really can't use third-party SSDs in Gen8 HP servers.

  • I could build an HP spec for $25,000 that would maximize the performance of the platform.
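The carving idea works out roughly like this (the volume sizes below are illustrative assumptions, not a recommendation):

```python
# One 16-spindle RAID 10 array of 900 GB disks, carved into logical
# drives on the Smart Array controller. Volume sizes are assumptions
# chosen for illustration only.
SPINDLES, DISK_GB = 16, 900
array_usable_gb = SPINDLES // 2 * DISK_GB  # RAID 10 halves raw capacity

volumes_gb = {"OS": 150, "Main DB": 2400, "Logs": 600,
              "OLAP": 300, "Reporting": 100, "TempDB": 300}
carved = sum(volumes_gb.values())
assert carved <= array_usable_gb  # the carve-out must fit the array
print(f"Array usable: {array_usable_gb} GB; carved: {carved} GB; "
      f"free: {array_usable_gb - carved} GB")
```

Every logical drive still benefits from all 16 spindles' worth of IOPS, which is the point of the approach.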

ewwhite
  • Only problem is that he does not NEED any volume isolation at all. And even IF he needs it, he can easily do the same with multiple partitions - no need at all to carve it up in the RAID controller. – TomTom Dec 05 '13 at 17:27

HP ProLiant DL380p Gen8 16-bay.

2 x 300GB 10k RAID 1 for OS
8 x 300GB 15k RAID 10 for main DB
1 x 128GB PCIe Fusion-io card (enterprise class) for TempDB
2 x 600GB 10k RAID 1 for all logs
2 x 300GB 10k RAID 1 for reporting and OLAP
2 x 900GB 10k RAID 0 for flat-file backup

Two RAID controllers with 1GB of write-back cache memory and battery backup.

Make sure you put the RAID 10 on one controller and the rest of them on the other card, and make it all SAS.

Karl P