How to measure performance gain with defragmentation?

I believe I understand the basic concepts of data fragmentation on hard drives, and the concept of defragmentation to counter its effects. What I don't really understand is how one actually measures the performance gain from defragmenting files.

Some say the system feels "snappier" or that things load faster after running a defragmentation. I don't feel this is always the case: I've run defrag many times on different PCs without noticing any performance gain at all.

So I'm wondering: is there any way to actually measure the performance difference before and after a defragmentation, and what the ACTUAL impact on the system's performance is?

**Update:** What I'm looking for is a tool that can give me some concrete indication of overall system performance improvements. Is this best achieved through benchmarking tools specific to HDD access speeds? Or will I get the best result through an application like File Access Timer from Raxco? Also, I'm running Windows XP.

pavsaund

Posted 2009-07-28T20:42:36.813

Reputation: 2 660

Answers

5

Measuring the performance gain from defragmentation is rather difficult; however, there are some utilities meant to "aid" you with it.

There's a utility called File Access Timer, from Raxco, available here. This tool reads a given file or folder a set number of times and displays how long the reads took, along with the number of fragments.

Excerpt from the readme

> The File Access Timer allows you to select a file or folder and read the contents several times in order to measure the performance gain achieved through defragmentation. The general process is to select a fragmented file, read the file using Raxco's File Access Timer, defragment the file, and re-measure the time needed to read the file. By doing this you can see the benefits of defragmentation for yourself.

Thor

Posted 2009-07-28T20:42:36.813

Reputation: 3 562

I think you meant "Excerpt from the readme"... – RBerteig – 2009-07-28T23:01:04.600

This is a small tool that measures file access times on specific areas of my HDD, and it actually does give me what I'm asking for. But it also demands that I manually set up the scan for the parts I want to measure. Thanks, I'm still interested in more advanced features though :) – pavsaund – 2009-07-29T07:21:36.067

I gave this the time to make several runs, and it showed some gain in access times on files. But it is time-consuming, and doesn't show any status... I'm pretty clueless as to how far along it has come – pavsaund – 2009-08-03T19:57:49.747

For this particular Q, there don't seem to be any other good answers, so I'm accepting this for now. Though I may follow @Michael Kohne's suggestion – pavsaund – 2009-08-03T19:59:41.783

1

Fragmentation only affects you if you have to read large segments of a file, and the fragmentation makes you seek all over the disk for them. If you never read more than a single disk cluster or so in one go, then fragmentation is irrelevant, because you'll have to seek anyway due to other activity since your last read.

Program files are one category likely to affect you, because they tend to be larger, and they tend to be read all in one shot. If your program files are fragmented, that could slow the loading of the program, possibly into the human-noticeable realm.

If you want to measure the effects of fragmentation, write a program that reads a large file from beginning to end, repeatedly. Do 1000 runs or so to smooth out the noise. Now defragment the file and do it again. See if the average read time goes down.
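A minimal sketch of such a benchmark in Python (the file path and run count here are placeholder assumptions; also note that after the first pass the OS file cache will likely serve the file from RAM, so flush the cache or reboot between the fragmented and defragmented measurements for a fair comparison):

```python
import time

PATH = r"C:\test\largefile.bin"  # hypothetical: any large, fragmented file
RUNS = 1000                      # repeat to smooth out the noise
CHUNK = 1024 * 1024              # read sequentially in 1 MB chunks

total = 0.0
for _ in range(RUNS):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):     # read the file from beginning to end
            pass
    total += time.perf_counter() - start

# NOTE: the OS cache will skew later runs; compare averages measured
# under the same caching conditions before and after defragmenting.
print("average read time: %.4f s" % (total / RUNS))
```

Run it against the fragmented file, defragment the file, then run it again and see whether the average goes down.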

Michael Kohne

Posted 2009-07-28T20:42:36.813

Reputation: 3 808

wouldn't this fail to take varying results by file size and actual physical placement on the disk into consideration? meaning how far the arm has to move, and how fast it accesses small / large files along the entire disk (performance-wise)? – pavsaund – 2009-07-29T04:50:44.967

You make a very valid point. Program Files are probably the area on disk where most of the performance hit is taken. After that, maybe the Windows directory and user profiles. – pavsaund – 2009-07-29T07:16:06.217

I suppose you could do a series of files, each of them fragmented, scattered across the disk. That way, you'd be reading more of the disk surface over the course of the test, and you'd likely see more improvement when you defragmented the drive. – Michael Kohne – 2009-07-29T11:03:13.577

1

The only measure I've ever learned to use is the "Split I/Os" performance counter in the "Physical Disk" category in perfmon. It measures the number of I/O requests per second that had to be split into two or more separate requests because the disk blocks they were looking for were not contiguous.
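If you'd rather sample that counter from a script than watch the perfmon GUI, here's a hedged sketch (it assumes typeperf.exe, which ships with Windows XP Professional, and the default English counter path; adjust the instance name to target a specific disk):

```python
import subprocess

# Log "Split IO/Sec" for all physical disks: one sample every
# 5 seconds, 12 samples in total. Assumes typeperf.exe is on the
# PATH and the counter names are the default English ones.
subprocess.call([
    "typeperf",
    r"\PhysicalDisk(_Total)\Split IO/Sec",
    "-si", "5",   # sample interval in seconds
    "-sc", "12",  # number of samples to collect
])
```

Compare the readings before and after a defragmentation under a similar workload; fewer split I/Os afterwards suggests the defragmentation actually reduced non-contiguous reads.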

John Saunders

Posted 2009-07-28T20:42:36.813

Reputation: 486