Why did defragmenting C drive increase my free disk space by 10 GB?

27

1

I used Defraggler to defragment free space on my 100 GB C:\ drive, which was 85 GB full. After defragmentation, the drive showed only 75 GB full. How did 10 GB of free space magically appear? Did I lose any data?

I did a disk cleanup before defragmenting, and my trash was only about 11 MB, so it can't be because of cleaning out temporary files. Note that I defragmented "free space", which means it should have rearranged the empty blocks to be contiguous.

goweon

Posted 2011-07-07T19:35:03.370

Reputation: 1 390

1

Probably an interaction with shadow copies; see for instance http://www.piriform.com/docs/defraggler/technical-information/windows-vista-and-volume-shadow-copies

– horatio – 2011-12-30T16:14:27.060

2Also, Defraggler has an option to empty the recycle bin before defragmenting. – oKtosiTe – 2013-03-31T12:13:10.083

Answers

14

What probably happened is that the defrag operation forced Windows to throw out some System Restore snapshots. It would take a pathological case of fragmentation for metadata overhead to consume a full 10% of your drive space on top of what Windows normally uses. Even then, I'm not sure it's possible.

I don't see anything in Defraggler's version history or documentation indicating that it can defragment files in a way that prevents the purging of shadow copies. In fact, this thread from Defraggler's support forum indicates that they know it's happening (there's a post from a board admin labeled "Official Piriform Bug Fixer" in the thread), but it doesn't say whether or not they're going to fix it.

Shadow copies may be lost when you defragment a volume: The reason this happens is that VSS operates with 16 KB clusters by default, while most NTFS volumes are formatted with 4 KB clusters. So if a defrag operation moves data in chunks that aren't a multiple of 16 KB (or the "distance" it's moved isn't a multiple of 16 KB), then VSS will track it as a change and might purge all your snapshots.

MSDN: Defragmenting Files:

When possible, move data in blocks aligned relative to each other in 16-kilobyte (KB) increments. This reduces copy-on-write overhead when shadow copies are enabled, because shadow copy space is increased and performance is reduced when the following conditions occur:

  • The move request block size is less than or equal to 16 KB.
  • The move delta is not in increments of 16 KB.

Vista's built-in defragmenter avoids this problem:

One change that’s not obvious to users is our shadow copy optimization during defragmentation. Defrag has special heuristics to move file blocks in a way that will minimize the copy-on-write activity and shadow copy storage area consumption. Without this optimization, the defragmentation process would accelerate the deletion of older shadow copies.
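
To make the 16 KB rule concrete, here is a minimal sketch in Python (illustrative only, with hypothetical sizes; it simply encodes the two conditions from the MSDN quote, treating either one as enough to trigger copy-on-write, and is not any real defragmenter's logic):

```python
# Rough sketch only (not Defraggler's or Windows' actual code): models the
# 16 KB rule quoted above, treating either condition as enough for VSS to
# track the move as a change. Sizes below are hypothetical examples.

VSS_BLOCK = 16 * 1024  # shadow copies track changes in 16 KB blocks

def move_is_tracked_by_vss(move_size_bytes: int, move_delta_bytes: int) -> bool:
    """Would this defrag move count as a change against the shadow copy?"""
    too_small = move_size_bytes <= VSS_BLOCK          # moved block is 16 KB or less
    misaligned = move_delta_bytes % VSS_BLOCK != 0    # distance moved isn't a multiple of 16 KB
    return too_small or misaligned

print(move_is_tracked_by_vss(4 * 1024, 20 * 1024))    # True  -> eats snapshot space
print(move_is_tracked_by_vss(64 * 1024, 128 * 1024))  # False -> no copy-on-write cost
```

A tool that shuffles individual 4 KB clusters around at arbitrary offsets hits those conditions constantly, so the shadow copy storage area fills up and older restore points get pushed out.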

afrazier

Posted 2011-07-07T19:35:03.370

Reputation: 21 316

I don't know about the snapshots part of this answer, but I am not convinced that the defrag freed the space. Could the defrag have emptied your Recycle Bin? On a filesystem that doesn't combine small files into a single cluster, I can't see how defragmenting would result in such a space gain. I could imagine you had 10G of space available in such small blocks that Windows wasn't counting it, but I'd not heard of that behavior before. – Slartibartfast – 2011-07-08T02:26:44.873

@Slartibartfast: It's a problem if the app isn't written correctly. I'll update my answer with more evidence, now that I'm not on my phone. :-) – afrazier – 2011-07-08T02:33:14.577

@afrazier - Although I think ThouArtNotDoc's updated answer is a better general answer, I have to agree that shadow copies are probably playing a role in the OP's situation. I found this as well: "If you plan to defragment volumes on which shadow copies are enabled, it is recommended that you use a cluster (or allocation unit) size of 16 KB or larger. If you do not, the number of changes caused by the defragmentation process can cause shadow copies to be deleted faster than expected." - MS: "Designing a Shadow Copy Strategy" http://technet.microsoft.com/en-us/library/cc728305(WS.10).aspx

– Ƭᴇcʜιᴇ007 – 2011-07-08T13:49:10.087

@afrazier: Confirmed. All previous system restore points are gone. In the config, I have 12% set as the maximum space allowed for shadow copies, so 10 GB is not unreasonable. On the other hand, I just installed a 9 GB game, which could have simply overwritten the previous restore points. – goweon – 2011-07-08T16:24:00.120

@firebat: The space used by System Restore is for tracking changes to existing (snapshotted) files, not new ones. Installing a new game wouldn't impact system restore space, but a large patch could. – afrazier – 2011-07-08T18:09:14.307

23

Each fragment has to be tracked somewhere. That takes storage space (within the file system's plumbing, not in files you're meant to access directly).

An example: suppose you have a single file with 1000 fragments, so the file is stored as a collection of scattered blocks rather than in a single contiguous run. This means the fragmented file requires roughly 1000x more storage space within the file system's plumbing, if only for storing the address of each fragment. The file system keeps little dictionaries/databases/maps/tables/lists pointing to the location of each fragment of the file. So, for the file system's plumbing, storing a list with a single fragment pointer doesn't need much space compared to a list of 1000 fragment pointers.

But hey, maybe I'm wrong...

Edit: Supporting information from here:

When a non-resident data stream is too heavily fragmented, so that its effective allocation map cannot fit entirely within the MFT record, the allocation map may itself be stored as a non-resident stream, with just a small resident stream containing the indirect allocation map to the effective non-resident allocation map of the non-resident data stream.

Translation: if you have heavy fragmentation, the general-case assumptions of the file system's plumbing no longer apply. The FS must take extra steps to accommodate the fragmentation, and that ends up costing extra storage space just to manage the fragments. Exactly my guess in the first place.


Edit: Given the above, it still seems crazy for 10 GB to be lost just to file fragmentation. I'm betting that while defragging, some common file system corruption was corrected automatically. I'm thinking you had not only massive fragmentation but also partially deleted files taking up storage space. It would have been nice to see a scandisk log from that defrag (or from a scandisk run prior to the defrag).
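
For a rough sense of scale, here is a back-of-the-envelope sketch (Python, illustrative only; it assumes about 16 bytes per fragment, the upper bound for the NTFS start:length extent pairs mentioned in the comments):

```python
# Back-of-the-envelope sketch: how much extent metadata does fragmentation cost?
# Assumes ~16 bytes per fragment, the upper bound for NTFS's 64-bit
# start:length pairs noted in the comments below.

BYTES_PER_FRAGMENT = 16
GB = 10 ** 9

# A pathologically fragmented file system with a million fragments in total:
fragments = 1_000_000
overhead_mb = fragments * BYTES_PER_FRAGMENT / 10**6
print(f"{fragments:,} fragments -> about {overhead_mb:.0f} MB of extent records")
# -> about 16 MB, nowhere near 10 GB.

# How many fragments would it take to fill 10 GB with extent records alone?
needed = 10 * GB // BYTES_PER_FRAGMENT
print(f"10 GB of extent records would require about {needed:,} fragments")
# -> 625,000,000 fragments on a 100 GB drive, which is implausible.
```

So even pathological fragmentation accounts for megabytes of bookkeeping, not gigabytes.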

James T Snell

Posted 2011-07-07T19:35:03.370

Reputation: 5 726

3PS - that must have been some hardcore fragmentation. – James T Snell – 2011-07-07T21:14:22.473

I don't know if this is a valid reason for such an increase in storage consumption after defragmentation, but I've noticed this many times: if you have a system restore point which is quite old (it generally doesn't happen if you regularly run the disk cleanup utility), and you then defragment after a lot of data has gathered on the C drive, the space consumption increases out of nowhere, but if you delete that restore point, it cleans up the occupied storage. Technically it may not make much sense, but it has happened to me many times, since over time restore points take up space. – Kushal – 2011-07-10T16:10:38.760

Dunno, 10G seems like a lot, but IME very full disks tend to fragment much, much more quickly than less full ones. If he was at >85% before (since he probably meant 85 GiB on a 100 GB disk), and possibly fuller in the meantime, he could have had spectacularly high fragmentation, with attendant horrid performance. – Bernd Haug – 2011-07-13T08:41:47.933

3Unless the block size on that volume is obscenely high, I personally can't believe in 10 GB of space being freed up just by not tracking fragments. With a block size of 4096 bytes, and assuming each fragment requires a full block just to track (which is false), it would take 2.5 million fragments formerly tracked but no longer tracked to free up that much space. – CarlF – 2011-07-15T19:07:16.677

I agree with CarlF. 10 GB on a 100 GB drive is 10% of the available blocks, all used to track where blocks are. At a minimum, each block is 512 bytes (1 sector). I haven't checked recently, but I'd expect a typical block to be a cluster of 2, 4 or maybe 8 sectors. In comparison, the worst storage requirement I can imagine for a next-fragment pointer is 16 bytes. IOW, even with extreme assumptions (single-sector clusters, etc.), the worst percentage of drive space I can imagine being used for this overhead is a little over 3% in total. – Steve314 – 2011-10-04T00:10:21.293

@Steve314 that seems like reasonable thinking to me. Though I wonder if each pointer takes a relatively large storage block. You know how when you format your file system you set the block size to something like 4 KB? Maybe NTFS's internal plumbing has to use those same block sizes for each fragment? That seems crazy crazy crazy to me, but I'm not sure how else to explain the question asker's observations. Thoughts? – James T Snell – 2011-10-04T17:23:16.123

10 GB would be enough to list around 625 MILLION fragments, so you would need 625,000 different files that each had 1,000 fragments. Something else must have happened, probably clearing out the web browser cache or something. – psusi – 2011-10-04T19:23:03.867

I guess the question then is whether the asker only defragged, and whether or not defrag auto-clears the browser cache? I'm guessing not. Maybe it's more that he had partially deleted files that would have been cleaned up by a scandisk run as part of the defrag? – James T Snell – 2011-10-04T19:41:38.673

@psusi - do you like my edit to the answer above? – James T Snell – 2011-10-04T19:59:21.483

1@Doc - a pointer won't take a whole block. If (as is likely) each allocation block is multiple sectors, you need fewer pointers as there are fewer blocks to track for your data - so the maximum possible overhead to track all the fragments will be much less than 3%. I don't know the exact details of NTFS, but with the (relatively inefficient) FAT filing systems, you still couldn't get 10% overhead for the File Allocation Table (those fragment-tracking pointers) that FAT was named after. – Steve314 – 2011-10-05T04:02:08.220

Personally, I think afrazier is right. After that, my next best guess would be that the defrag does some checking as well, and frees some lost blocks as it goes - though with such bad corruption that you have 10% in lost blocks, you'd think there'd be other serious problems. Also, if you have a defrag tool that tries to fix corruption as it goes, that's a bad thing - if it spots corruption, it should error out so you get the chance to do backups and separate repairs. Any attempt at repairing corruption may sometimes cause worse problems. – Steve314 – 2011-10-05T04:04:02.087

The answer below about volume shadow copies seems to be a rather likely explanation. FYI, NTFS stores 64 bit start:length pairs for file fragments, so they use at most 16 bytes, but are often encoded in less than that. – psusi – 2011-10-05T13:46:21.007
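
As an aside, a rough sketch of how those NTFS extent records ("data runs") are encoded may make the point concrete; this describes the general on-disk format rather than anything specific to this thread, and the sample bytes are invented for illustration:

```python
# Illustrative decoder for NTFS "data runs" (mapping pairs): each fragment is
# stored as a header byte, a run length in clusters, and a signed delta from
# the previous fragment's starting cluster -- typically just a few bytes per
# fragment. Sample bytes below are made up.

def decode_data_runs(raw: bytes):
    """Decode a run list into (start_cluster, cluster_count) extents."""
    runs = []
    pos = 0
    prev_lcn = 0
    while pos < len(raw) and raw[pos] != 0x00:   # a 0x00 header byte ends the list
        header = raw[pos]
        len_size = header & 0x0F                 # bytes used for the run length
        off_size = (header >> 4) & 0x0F          # bytes used for the LCN delta
        pos += 1

        length = int.from_bytes(raw[pos:pos + len_size], "little")
        pos += len_size

        if off_size == 0:                        # sparse run: no clusters on disk
            runs.append((None, length))
            continue

        delta = int.from_bytes(raw[pos:pos + off_size], "little", signed=True)
        pos += off_size
        prev_lcn += delta                        # offsets are deltas from the previous run
        runs.append((prev_lcn, length))
    return runs

# A two-fragment file: 8 clusters at cluster 0x5634, then 4 clusters starting
# 0x100 clusters after the first run.
sample = bytes([0x21, 0x08, 0x34, 0x56,   # header 0x21: 1-byte length, 2-byte offset
                0x21, 0x04, 0x00, 0x01,   # second run encoded as a delta
                0x00])                    # terminator
print(decode_data_runs(sample))           # [(22068, 8), (22324, 4)]
```

Each fragment in that example costs only four bytes of metadata, which reinforces how cheap fragment tracking is.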

Defragging would not cause the MFT to shrink. Apparently it only ever grows! So while defragging the files would cause the MFT to need less space to track them, it would not actually free up more space since the MFT’s size would remain unchanged (and even if it could shrink, 10GB seems highly unlikely). – Synetech – 2012-05-19T01:47:49.957