
I have to install a new server with SQL Server 2008. What do you recommend: one server with RAID 10, or the files on a NAS?

What about iSCSI? Should I use it?

What about SAN?

The server has 4 GB of RAM and the database file is about 2 GB.

To make myself clear: today the server has no RAID. I have to implement some kind of strategy so that if something happens my files are safe. So what should I choose: local files, NAS, or SAN? Which option has the best performance, and which is the most secure?

Jedi Master Spooky
  • That's kind of like asking "How much RAM should I order in my new server?" -- it's an impossible question unless you go into some level of detail about your usage, requirements, etc. – Portman Apr 30 '09 at 13:02
  • Absolutely agree with Portman. Can you tell us more about the requirements? That's like asking, "What kind of car should I buy?" without telling the salesman your needs. – K. Brian Kelley Apr 30 '09 at 13:25

7 Answers


NAS

Definitely not NAS for SQL Server. SMB/CIFS does not have adequate file-locking support for a DBMS (at least it didn't a few years ago, ca. 2002-2003). Note that NFS does, and you can actually run Oracle against an NFS server. However, SQL Server on a CIFS share is not reliable due to limitations of the protocol, and by default it will not even let you put database files on CIFS-mounted shares.

SAN

This is good for transactional applications as the cache on the RAID controllers can absorb quite large working sets. SAN RAID controllers will typically support more cache than host-based RAID controllers, particularly on high-end kit where a RAID controller might be a multiprocessor box that's just as powerful as a server.

SANs with dual controllers also have an architecture with no single point of failure and offer many options for hot back-up. This makes them a win from a manageability and reliability perspective. However they are expensive and constrained for streaming data volumes, although the latter is unlikely to be an issue on a transactional system.

For operational systems, SANs are almost always the best choice if available. They can also be shared between multiple servers running low-mid volume systems. However they come with a price tag that puts quite a substantial lower bound on the smallest system that the technology can be used with.

Direct Attach

In some cases, direct attach storage is best. One possibility is bandwidth constrained streaming applications, where the limited number of fibre channel connections will constrain the available bandwidth to less than might be possible with a high-end SAS controller. However, these are likely to be fairly specialised applications such as very large data warehouses where a shared-nothing architecture may provide the best throughput.

In fact, direct attach storage is often better than a SAN for data warehouse systems, for a number of reasons:

  • Data warehouses put large transient load spikes on disk subsystems. This makes them quite anti-social on SANs as they can affect the performance of other systems on the SAN.

  • The aforementioned streaming bottleneck.

  • Direct attach storage is quite a lot cheaper than SAN storage.

Direct-attach storage also makes sense when you are selling into a market that will not pay enough money for a SAN. This is often true of applications sold to SMB customers. For a point-of-sale or practice-management system that will have six users, a SAN is probably overkill; in this type of situation a small stand-alone tower server with some internal disks is a far more appropriate solution.

  • Is this still true nowadays? – Braiam Apr 23 '20 at 14:58
  • As far as I am aware it is. You can mount SQL Server mdf, ndf and ldf files on block storage but backup files can be on a CIFS share as well. This was the case as of SQL Server 2012 or 2014 and I'm not aware of it changing since then. – ConcernedOfTunbridgeWells May 13 '20 at 11:44
  • I was not referring to whenever we can or not, but about the performance penalty of using CIFS/NFS/SMB. – Braiam May 13 '20 at 13:07

Local or SAN is the only way to maintain performance. In some cases, SAN can be faster than local disk because of larger write cache and parallel disk throughput configuration.

Avoid doing any high-performance file I/O over Windows shares. There is a large amount of protocol overhead that will slow down throughput dramatically. For example, years ago I measured a large file transfer over a WAN as roughly 50% faster using FTP than using Windows shares.

spoulson
    I'd avoid SAN unless it is dedicated to SQL, which isn't usually the case. SAN's are nice and fast for SQL until you have some other app hogging the write cache. – SqlACID Apr 30 '09 at 16:43

With a database size of 2 GB, there is no reason to even consider a SAN. A NAS will not work for the database files; you should only use a NAS for backups, so you are not storing backups on the same machine as your data, and so it is easy to copy them from the NAS for off-site storage.
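As a sketch of that backup arrangement (the server, share, and database names here are hypothetical), a full backup to a NAS share is a single statement:

     -- Back up to a UNC path on the NAS (server/share names are hypothetical).
     -- Unlike data files, backup files are fine on an SMB share.
     BACKUP DATABASE [MyDatabase]
     TO DISK = N'\\nas01\sqlbackup\MyDatabase_full.bak'
     WITH INIT, CHECKSUM;

The SQL Server service account needs write permission on the share for this to work.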

With a small database, just use local disks in RAID 1 or RAID 10. If your database fits in RAM, you don't have to be quite so worried about I/O performance. Use 64-bit Windows and put 8 GB in there; that will leave plenty for the OS and give you room to grow.
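If you do put 8 GB in a 64-bit box, it is worth capping SQL Server's memory so the OS keeps its share. A minimal sketch (the 6144 MB figure is illustrative, not a recommendation):

     -- Cap SQL Server at 6 GB, leaving ~2 GB for the OS on an 8 GB box.
     -- The value is illustrative; tune it for your workload.
     EXEC sp_configure 'show advanced options', 1;
     RECONFIGURE;
     EXEC sp_configure 'max server memory (MB)', 6144;
     RECONFIGURE;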

Eric Z Beard

Don't use NAS.

Use local disks (OK for the medium term with a good RAID controller), but if budget allows, go for a decent SAN. With luck you can start to "share" the SAN with other systems and reclaim much of the initial outlay.

4 GB of RAM is fine for most systems as long as it's a dedicated SQL Server machine (and it always should be). If you've not already considered it, use a 64-bit OS and SQL Server so you can easily throw more RAM in without mucking around with PAE/AWE.

Also think about virtualisation. You're going to need a test server? A dev server? Somewhere to test the installation of SP1? (Shameless plug for an earlier post: sql-server-in-vmware)

Guy

For a small 2GB database, a SAN would be overkill.

Generally speaking, you don't want your SQL drives to be shared with any other application. Likewise, the more spindles (disks) you can spread your files across, the better. Again, with a small database like you describe, you'll be able to fit everything you need in RAM, so disk performance will be less of a factor.

That said, don't go and create a single RAID-10 volume and put ALL of your database files on it. You could start with something simple like:

  • Data: 2x 73 GB 15K RPM, RAID 1

  • Logs: 2x 73 GB 15K RPM, RAID 1

  • Backups: a single large 7200 RPM disk, or a pair in RAID 1

If you have the budget to upgrade, add another separate volume for TempDB (if you do any complex reporting or analytical queries), or give the log volume more disks and configure it as RAID 10 if your system is more transactional with a high volume of updates.
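Assuming the volumes above are mounted as, say, D: for data and L: for logs (the drive letters and database name here are hypothetical), the data/log separation is just a matter of file placement at creation time:

     -- Data file on the RAID 1 data volume, log file on the RAID 1 log volume.
     -- Drive letters, paths, and names are hypothetical.
     CREATE DATABASE [Sales]
     ON PRIMARY ( NAME = N'Sales_data', FILENAME = N'D:\SQLData\Sales.mdf' )
     LOG ON     ( NAME = N'Sales_log',  FILENAME = N'L:\SQLLogs\Sales.ldf' );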

A SAN would likely cost 3-4x as much as direct-attached storage for an equivalent number of drives. SANs do provide many advanced capabilities for backup/snapshotting, clustering support & LUN migration and expansion. If these are important to you, it may be worth the added expense.


Disclaimer: there are many serious issues with storing SQL Server database files on a NAS. All other options being equal, it is correct to choose the SAN option; a detailed discussion of the relevant issues is in MS KB304261. However, this thread has become a catch-all for other questions regarding SQL Server on a network share, so I would rather post references to the state of NAS support across SQL Server versions.

Quite a few people now experiment with using NAS with MS SQL.

This used to be prohibited by default, but one could lift the restriction in MS SQL 2005 and MS SQL 2008. Since MS SQL 2008 R2 there is no such restriction.

In MS SQL 2005 and 2008 the key is trace flag 1807, which by default prohibits the use of network shares. One can issue:

 -- Trace flag 1807 lifts the block on network paths (SQL Server 2005/2008).
 DBCC TRACEON(1807, -1)
 -- A database can then be created on a UNC path.
 CREATE DATABASE [test] ON PRIMARY
 ( NAME = N'test', FILENAME = N'\\storage\mssql_db\test.mdf' )

More details are in the September 2010 post on Varun Dhawan's MSDN blog.

At least technically running SQL Server on a NAS is possible. The choice depends on many factors and it is not hard to imagine a situation where storage for a database is readily available on a NAS.

Dmitri Chubarov
  • Possible does not mean advisable; this question is really asking if one should host a MSSQL DB on a NAS device, the answer to which is almost universally no. You are correct that it's possible by default in 2008R2, but it's still ill advised. – Chris S May 30 '12 at 13:23
  • @ChrisS This thread has become a catch all for general questions on SQL Server on NAS, new questions being often closed as duplicates. I have added a disclaimer to reflect that it is not advisable. – Dmitri Chubarov May 30 '12 at 13:36

I work with systems that use both local RAID disk and fibre-attached SAN. The SAN appears to perform better, even though the local-disk machine is newer and faster. I think a significant factor is that the local disk arrangement is a single drive, so SQL Server is sharing disk I/O with the OS and the swap file. This is likely contributing heavily to the drag.

As @spoulson mentioned, the SAN can provide far better disk performance. Not only is it a separate drive but it's likely on much faster disk hardware.

In our case, we are using a SAN because it provides centralized storage for auto-failover VERITAS SQL Server service groups. When the primary machine fails, the Windows drives mounted on the SAN get remounted on the hot-backup machine and the server comes back up fairly quickly. Of course, this requires something like VERITAS for service-group management, which might be outside your scope.

The one caveat we've struggled with is that the SAN disk space assignment is handled conservatively. Because it's expensive they don't dish it out on a whim. With the EMC SAN we use, you can't reclaim assigned space without dumping an entire LUN. So if we find the allocated space is more than we need and we want to use some of that space for another LUN, we can't just shrink the oversized one. We have to drop it and reassign. This doesn't work so well in a production environment.

As others have suggested, avoid NAS. You might as well put your data files on a USB external hard drive.

Peter
  • If your EMC SAN is a CX4, later this year EMC is releasing an update to the OS which will support shrinking LUNs to go with Windows 2008's ability to shrink a disk. – mrdenny May 20 '09 at 15:10