
We have a server with a RAID volume. Windows DEFRAG shows very high fragmentation (90%) on the volume. My chief asked whether the fragmentation DEFRAG reports is correct (or near correct).

No defrag has been run for a long time (at least not in the last 4 months, which is how long I've worked here). It's a production server and we are very worried about it.

Fabricio Araujo
    What type of RAID are we talking about? – Urda Feb 17 '10 at 20:56
  • What kind of server is it? You need to investigate the cause; you will most likely be able to prevent or reduce it – Nick Kavadias Feb 18 '10 at 00:17
  • Short answer- file system fragmentation is fragmentation, whether it is on a single disk partition or spread across any combination of RAID, spanning, virtual disks, etc. The files are not contiguous and that slows down read/write cycles. – kmarsh Feb 18 '10 at 13:58

4 Answers


Defrag will report the fragmentation of the logical disk. What this means in terms of how your data is scattered across the physical disks in the array depends on what kind of RAID (0, 1, 5, etc.) you're running and a little on the internals of your controller.

Generally you can probably treat it like you would any other hard drive (i.e. "90%?? For the love of Dog defrag it!"), though at 90% that may be a painful experience.
Also of note: Defragmenting is obviously very disk-intensive. If these are original disks you may want to make extra sure that your backups are good before defragging, just in case the defrag convinces the RAID controller that one or more drives are "failing".
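Before committing to a full pass, an analysis-only run is a cheap way to confirm that 90% figure. As a sketch (the switch syntax for the built-in defrag.exe varies by Windows version, so check `defrag /?` on your box first):

```shell
:: Analysis only -- reports fragmentation without moving any data.
:: Windows XP / Server 2003 syntax:
defrag c: -a -v

:: Windows Vista / Server 2008 and later syntax:
defrag C: /A /V
```

The verbose report also shows free-space fragmentation and the largest free extent, which is a good indicator of how painful (and how effective) a full defrag is likely to be.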

voretaq7
    HUH? "convinces the controller that the disk is failing"? How's that? – Fabricio Araujo Feb 17 '10 at 21:23
    He is saying that defragging the hard drives MIGHT cause the RAID controller to Fail a disk. I have seen it happen myself. – steve.lippert Feb 17 '10 at 21:36
    Most RAID controllers will mark a disk as failed after a certain number of soft/correctable errors in a set period of time. Lots of disk activity from a defrag + a marginal disk = enough errors to mark the drive "failed". That happens to enough drives & your array blows up (ask me how I know :-/ ). Sometimes you see it during RAID rebuilds on old arrays too. – voretaq7 Feb 17 '10 at 21:37
  • So that's the risk of going so long with no defrag. Would it be better to open another question to ask about common precautions/good practices to reduce the pain when such a thing happens? – Fabricio Araujo Feb 17 '10 at 22:11
  • It's more a risk of lots of activity with any aging disk. Without knowing how old the server is I err on the side of paranoia & assume it's an "original vintage" Win2K3 box (at least 5 years old) & worry about the disks – voretaq7 Feb 17 '10 at 22:28
  • If you're asking about failed drives due to defrag, shouldn't you first ask about the state of your backups for a heavily used and old server...? If the defrag can get the drive over the edge of the bit bucket in the sky, I think just using the drives risks the same thing at a less opportune time... – Bart Silverstrim Feb 17 '10 at 22:43
  • If the server is heavily used and the system is old, I'd make sure backups are good and then try running "mydefrag" on it. It's a freeware application, I've had no problems with it, it uses the Windows API for handling the defrag operations, and is relatively fast and can be used as a screen saver to "tune" the disk at other times and help keep it cleaned up if deemed necessary. – Bart Silverstrim Feb 17 '10 at 22:44
  • (I'd also add that by heavily used, you shouldn't run the defrag while in use, necessarily, if disk access speed is critical in your environment, as any disk intensive activity can slow things down... :-) ) – Bart Silverstrim Feb 17 '10 at 22:45

Yes, it does report correctly.

I should point out that the only time I've seen such severe fragmentation on a Windows volume is when Shadow Copies are being stored on the volume.

Find some space on a drive that isn't fully partitioned, create a fresh volume dedicated to holding the shadow copies, and move the shadow copy storage area there. Then see if your fragmentation drops by a huge amount without even defragmenting the volume that is currently showing ~90% fragmentation.

Assuming you are using Shadow Copies, they should never be on the same drive as the source files they are copying.
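As a sketch of what that move looks like (the drive letters and the 10GB cap here are placeholders for your own layout, and deleting the storage association typically discards the existing shadow copies, so plan accordingly):

```shell
:: See where shadow copy storage currently lives.
vssadmin List ShadowStorage

:: Remove the storage association that keeps C:'s shadow
:: copies on C: itself, then re-create it on dedicated volume D:.
vssadmin Delete ShadowStorage /For=C: /On=C:
vssadmin Add ShadowStorage /For=C: /On=D: /MaxSize=10GB
```

This can also be done from the Shadow Copies tab in the volume's Properties dialog, which offers the same storage-area setting.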

If you aren't using Shadow Copies, the next most likely culprit is a backup application like Backup Exec storing backup files in chunks that are too small.

Though really, any program that creates medium-to-large files and then deletes them on a regular basis could create the same situation.

pplrppl
  • Depending on how old the server is (I see it's Win2K3) I've seen a few file servers that actually got this bad "naturally". Usually the disks were over 75-80% capacity & very active... – voretaq7 Feb 17 '10 at 21:47

RAID shouldn't have any effect on the fragmentation count in Windows. The RAID system presents a single logical disk to Windows, and the file system (where fragmentation is calculated) is built on top of it.

pehrs

Windows defrag just uses a defragmentation API that is built on top of the logical filesystem, which in turn sits somewhere above the HAL. At this level the underlying hardware really doesn't matter: as long as your device drivers are doing their job correctly, the reported fragmentation will be at worst consistent, irrespective of the app used.

Maximus Minimus