
A white paper published by Diskeeper Corporation states on page 3 that:

The most common problems caused by file fragmentation are:

a) Crashes and system hangs/freezes
b) Slow boot up and computers that won't boot up
c) [...]
d) File corruption and data loss
e) Errors in programs
f) RAM use and cache problems

How accurate are these statements (especially the ones about crashes and aborted bootups)?

splattne
Dan Dascalescu

4 Answers


The only one of those I've personally observed is the first half of (b): slower boot-up and, indeed, generally slower file access. My personal experience is that there is a measurable difference between a well-defragmented disk and a badly fragmented disk. How much it actually impacts people, however, I'm not so sure.
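If you want to put a rough number on that difference yourself, timing a large sequential read before and after defragmenting is enough. A minimal sketch in Python, assuming a suitably large test file (the path below is a placeholder) and a cold cache, e.g. right after a reboot, so the second run isn't served from RAM:

    # Time a sequential read of one large file; run once on the fragmented
    # volume and once after defragmenting, rebooting in between so the
    # OS page cache doesn't hide the difference.
    import time

    CHUNK = 1024 * 1024                      # 1 MiB per read
    path = r"C:\temp\big_test_file.bin"      # placeholder test file

    start = time.monotonic()
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    print(f"{total / 2**20:.0f} MiB in {elapsed:.2f} s "
          f"({total / 2**20 / elapsed:.1f} MiB/s)")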

Cry Havok

Common is the wrong word.

In my experience heavy fragmentation just slows things down (slow boot, short freezes), sometimes to the point of applications timing out, which causes software instability because the application doesn't expect this to happen. That in turn may lead to points a), d) and e), but only as a side effect.

Additionally, on a very heavily fragmented disk that is also 99.999999999% full, file corruption may actually become an issue, as the filesystem itself runs out of elbow room to do its work. In general, though, you will consider the PC too slow to be usable long before you reach that point.

As for f): RAM use for caching will generally not increase, but the efficiency of the caching will drop sharply.

In general: as of Windows XP, the NTFS filesystem is quite good at keeping fragmentation limited to reasonable levels all by itself. YMMV, but in my experience, for most use cases (at home or on servers) there is no real need for the continuous defragmentation that Diskeeper wants to sell you.

For intensively used file servers (lots of new, modified and deleted files) it's another matter: a low-priority defrag job running in the background can really help to keep system response times stable over a long period of time. That is, provided the server isn't in constant use at that intensity 24/7; the defrag software needs a chance to do its job. If it can't do its job properly, it only makes matters worse. In such cases it's often more efficient to dump the entire filesystem to tape (or to another disk; HDs are cheap these days), format the filesystem, and copy everything back. Do this every X weeks, where X depends on when the performance loss becomes problematic.
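If you do want that kind of background job on a Windows file server, something along these lines is one way to do it. This is only a sketch: the drive letter is an assumption, and it simply runs the stock defrag.exe tool with Python's BELOW_NORMAL_PRIORITY_CLASS creation flag (Windows-only, Python 3.7+) so it yields to regular file traffic:

    # Run the built-in Windows defrag tool on a data volume at
    # below-normal process priority so it stays out of the way of
    # normal I/O load.
    import subprocess

    subprocess.run(
        ["defrag", "D:", "/U"],   # /U prints progress; D: is a placeholder
        creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS,
        check=False,              # defrag's exit code is informational here
    )

Hook that into whatever maintenance window you already have, and keep the caveat above in mind: if the server never gets a quiet period, the job can't finish its work and the dump/format/restore route is the better option.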

Tonny

I would also agree that it only affects performance, but frankly, that is mitigated by the fact that drives and processors are so much faster. In the old days, when drives transferred at 66/100/133 MB/s, it was probably more noticeable, but today's drives are so fast that I would think you would be measuring the differences in performance in milliseconds; in other words, not noticeable.

You are better off just using the defrag program that comes with your OS and scheduling defrags once a month, if your OS doesn't already do that automatically, as Windows 7 does.
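If your OS doesn't schedule it for you, registering a monthly run with the Task Scheduler is all it takes. A minimal sketch using the stock schtasks and defrag tools (task name, day and time are arbitrary placeholders; run it from an elevated prompt):

    # Register a monthly defrag of C: with the Windows Task Scheduler.
    import subprocess

    subprocess.run([
        "schtasks", "/Create",
        "/SC", "MONTHLY", "/D", "1",   # first day of each month
        "/ST", "03:00",                # 3 AM, while the machine is idle
        "/TN", "MonthlyDefrag",        # placeholder task name
        "/TR", "defrag.exe C:",
    ], check=True)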

KCotreau

Diskeeper does provide a lengthy justification for each of those points in the remainder of the paper, and does list the Microsoft articles that it is basing its analyses upon. (This is, of course, a Windows-specific white paper.)

Addressing aborted bootstraps specifically:

Yes, this is a problem. Indeed, it is one that isn't even confined to Windows NT. There are several operating system boot loaders that require that (a few, not all) operating system program image files be contiguous, because no filesystem driver has been loaded yet and the code to read off the disc is very simplistic and can only cope with contiguous files (treating them, essentially, as a single multi-sector read operation). This is the case for the FAT volume boot records of many operating systems, and is the reason that, historically, system files such as IBMBIO.COM (PC-DOS and DR-DOS) and IO.SYS (MS-DOS) have needed to be contiguous.

Of course, once one has loaded filesystem driver code that is capable of completely understanding the on-disc data structures, which can happen very early on in many operating systems, fragmentation stops being a fatal problem and becomes merely an I/O performance issue. So it's only an ABEND in the case of a few operating system bootstrap files; and generally effort is made to make those files contiguous on disc when they are first written, in any case. (Reserving contiguous space so that this would be true is what the /B option to the MS/PC-DOS FORMAT command was all about, for example.)

JdeBP