I know that there are really three kinds of fragmentation that I need to be concerned about as a DBA:
Index fragmentation in the SQL data files, including clustered index (table) fragmentation. Identify this using DBCC SHOWCONTIG (in SQL 2000) or sys.dm_db_index_physical_stats (in 2005+).
VLF fragmentation inside SQL log files. Run DBCC LOGINFO to see how many VLFs are in each of your log files.
Physical file fragmentation of the database files on the hard drive. Diagnose this by using the "Disk Defragmenter" utility in Windows. (inspired by this excellent blog post)
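For the first two checks, a minimal T-SQL sketch might look like this (object and threshold choices are my own illustration, not prescriptive):

```sql
-- Index fragmentation (SQL 2005+): NULL arguments scan every object in the
-- current database; 'LIMITED' mode is the cheapest scan level.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.avg_fragmentation_in_percent > 10;  -- illustrative threshold

-- VLF count: DBCC LOGINFO returns one row per VLF, so a high row count
-- indicates log file (VLF) fragmentation.
DBCC LOGINFO;
```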
A lot of attention is paid to index fragmentation (see this excellent ServerFault answer from Paul Randal), so that's not the focus of my question.
I know I can prevent physical fragmentation (and VLF fragmentation) when the database is originally created by planning reasonable expected data file and log sizes, because this fragmentation occurs most often from frequent grows and shrinks. But I have some questions about how to fix physical fragmentation once it is identified:
First of all, is physical fragmentation even relevant on an Enterprise SAN? Can I/should I use Windows Defragmenter on a SAN drive, or should the SAN team be using internal defragmenting utilities? Is the fragmentation analysis I get from the Windows tool even accurate when run on a SAN drive?
How big a deal is physical fragmentation for SQL performance? (Let's assume an internal drive array, pending the outcome of the prior question.) Is it a BIGGER deal than internal index fragmentation, or is it really the same kind of problem (the drive having to do random reads instead of sequential reads)?
Is defragmenting (or rebuilding) indexes a waste of time if the drive is physically fragmented? Do I need to fix the one before I address the other?
What's the best way to fix physical file fragmentation on a production SQL box? I know I can turn off SQL services and run Windows Defrag, but I also heard about a technique where you do a full backup, drop the database, then restore from the backup to an empty drive. Is this latter technique recommended? Does restoring from a backup like this also build indexes from scratch, eliminating internal index fragmentation? Or does it simply return the page order to the same as when the backup was taken? (We're using Quest Lightspeed backups with compression, if that matters.)
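For reference, the backup/drop/restore technique I heard about would look roughly like this (database and path names are hypothetical, and this uses plain native backup rather than our Lightspeed setup):

```sql
-- Full backup to a drive that won't be wiped:
BACKUP DATABASE MyDb TO DISK = N'E:\Backup\MyDb.bak' WITH INIT;

-- After dropping the database and emptying/defragmenting the data drive,
-- restore onto the now-contiguous free space:
RESTORE DATABASE MyDb
FROM DISK = N'E:\Backup\MyDb.bak'
WITH MOVE 'MyDb'     TO N'F:\Data\MyDb.mdf',      -- logical file names are
     MOVE 'MyDb_log' TO N'F:\Data\MyDb_log.ldf',  -- hypothetical here
     REPLACE;
```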
UPDATE: Good answers so far on whether to defragment SAN drives (NO) and whether index defragmentation is still worthwhile on physically fragmented drives (YES).
Anyone else care to weigh in on the best methods for actually doing the defragmentation? Or an estimate on the length of time you'd expect it would take to defrag a large fragmented drive, say 500GB or so? Relevant, obviously, because that's the time my SQL server will be down!
Also, if anyone has any anecdotal info on SQL performance improvements you've made by fixing physical fragmentation, that would be great, too. Mike's blog post talks about uncovering the problem, but isn't specific about what kind of improvement it made.