Somewhere inside a traditional hard disk is a spinning metal platter where the individual bits and bytes are actually encoded. As data is added, the disk controller stores it on the outside of the platter first, and only works its way toward the inside of the disk as the outer tracks fill up.
With this in mind, there are two effects that cause disk performance to decrease as the disk fills up: Seek Times and Rotational Velocity.
Seek Times
To access data, a traditional hard disk must physically move a read/write head into the correct position. This takes time, called the "seek time". Manufacturers publish the seek times for their disks, and it's typically just a few milliseconds. That may not sound like much, but to a computer it's an eternity. If you have to read or write at a lot of different disk locations to complete a task (which is common), those seek times can add up to noticeable delay or latency.
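To make that concrete, here's a rough back-of-envelope sketch in Python. The 9 ms average seek time is an illustrative assumption, not the spec of any particular drive:

```python
# Back-of-envelope: how random seeks accumulate into noticeable latency.
# The 9 ms average seek time is an illustrative assumption, not a real
# drive's specification.
AVG_SEEK_MS = 9.0

def total_seek_latency_ms(num_random_ios: int, avg_seek_ms: float = AVG_SEEK_MS) -> float:
    """Time spent purely moving the head for num_random_ios scattered accesses."""
    return num_random_ios * avg_seek_ms

for ios in (10, 100, 1000):
    print(f"{ios:>5} random I/Os -> ~{total_seek_latency_ms(ios):,.0f} ms of pure seek time")
# 1,000 scattered accesses cost ~9 seconds of head movement alone, before a
# single byte is transferred; the same reads from contiguous data would incur
# almost none of that.
```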
A drive that is almost empty will have most of its data in or near the same position, typically at the outer edge near the rest position of the read/write head. This reduces the need to seek across the disk, greatly reducing the time spent seeking. A drive that is almost full will not only need to seek across the disk more often and with larger/longer seek movements, but may have trouble keeping related data in contiguous sectors, further increasing disk seeks. This is called fragmentation.
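As a sketch of why fragmentation matters, compare reading one file contiguously versus in pieces. The figures below (9 ms average seek, 120 MB/s sustained transfer, a 100 MB file) are assumptions chosen purely for illustration:

```python
# Sketch: reading a 100 MB file, contiguous vs. fragmented.  Each extra
# fragment costs roughly one additional seek before the transfer can resume.
# All figures are illustrative assumptions, not measurements.
FILE_MB = 100
TRANSFER_MB_PER_S = 120.0
AVG_SEEK_MS = 9.0

def read_time_ms(fragments: int) -> float:
    transfer_ms = FILE_MB / TRANSFER_MB_PER_S * 1000  # raw transfer time
    return transfer_ms + fragments * AVG_SEEK_MS      # plus one seek per fragment

for fragments in (1, 50, 500):
    print(f"{fragments:>3} fragment(s): ~{read_time_ms(fragments):,.0f} ms")
# At 500 fragments, seeking alone (500 * 9 ms = 4.5 s) dwarfs the ~0.8 s
# the actual data transfer takes.
```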
Freeing disk space can improve seek times by allowing the defragmentation service not only to more quickly clean up fragmented files, but also to move files towards the outside of the disk, so that the average seek time is shorter.
Rotational Velocity
Hard drives spin at a fixed rate (typically 5400 or 7200 RPM in a desktop computer, and 10,000 or even 15,000 RPM in a server). It also takes a fixed amount of space on the platter (more or less) to store a single bit. For a disk spinning at a fixed rotation rate, the outside of the disk has a higher linear velocity than the inside. This means bits near the outer edge of the disk move past the read head at a faster rate than bits near the center, and thus the read/write head can read or write bits faster near the outer edge of the disk than the inner.
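You can put numbers on this with the relation v = 2πr × (RPM / 60): at a fixed rotation rate, linear velocity, and hence raw read/write rate at a constant linear bit density, scales with track radius. The radii below are rough assumptions for a 3.5-inch platter, used purely for illustration:

```python
import math

# At a fixed rotation rate, the speed bits pass under the head is
#   v = 2 * pi * r * (rpm / 60)
# so throughput at constant linear bit density scales with radius r.
# The radii are illustrative assumptions for a 3.5" platter, not real specs.
RPM = 7200
OUTER_RADIUS_MM = 46.0  # assumed outermost usable track
INNER_RADIUS_MM = 20.0  # assumed innermost track

def linear_velocity_mm_per_s(radius_mm: float, rpm: float = RPM) -> float:
    """Linear velocity of the track surface under the head."""
    return 2 * math.pi * radius_mm * (rpm / 60.0)

outer = linear_velocity_mm_per_s(OUTER_RADIUS_MM)
inner = linear_velocity_mm_per_s(INNER_RADIUS_MM)
print(f"outer track: {outer / 1000:.1f} m/s, inner track: {inner / 1000:.1f} m/s")
print(f"outer/inner throughput ratio: {outer / inner:.2f}x")
# With these assumed radii the outer tracks are ~2.3x faster, which is in the
# same ballpark as the outer-to-inner sequential-throughput spread that
# benchmark curves show for many hard drives.
```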
A drive that is almost empty will spend most of its time accessing bits near the faster outer edge of the disk. A drive that is almost full will spend more time accessing bits near the slower inner portion of the disk.
Again, freeing disk space can make the computer faster by allowing the defrag service to move data towards the outside of the disk, where reads and writes are faster.
Sometimes a disk will actually spin too fast for the read head to keep up, and this effect is reduced because sectors near the outer edge are written staggered, i.e. out of order, so that the head can keep pace. But overall the effect holds.
Both of these effects come down to a disk controller grouping data together in the faster part of the disk first, and not using the slower parts of the disk until it has to. As the disk fills up, more and more time is spent in the slower part of the disk.
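A toy model makes the fill-up effect visible. Assume data fills from the outer edge inward and that both per-track capacity and throughput scale with radius r (constant linear bit density); then a fill fraction f corresponds to a current write radius r_f via f = (r_o² − r_f²) / (r_o² − r_i²). The radii are the same illustrative assumptions as above:

```python
import math

# Toy model: data fills outer-first, and both per-track capacity and
# throughput scale with radius r.  Solving the annulus-area relation
#   f = (r_o**2 - r_f**2) / (r_o**2 - r_i**2)
# for r_f gives the radius where the "newest" data lands at fill fraction f.
# Radii are illustrative assumptions, not real drive geometry.
R_OUTER_MM = 46.0
R_INNER_MM = 20.0

def current_write_radius_mm(fill_fraction: float) -> float:
    return math.sqrt(R_OUTER_MM**2 - fill_fraction * (R_OUTER_MM**2 - R_INNER_MM**2))

for f in (0.10, 0.50, 0.90, 1.00):
    r = current_write_radius_mm(f)
    print(f"{f:4.0%} full -> writing at r = {r:4.1f} mm, "
          f"~{r / R_OUTER_MM:.0%} of outer-edge throughput")
# Under these assumptions a 10%-full drive still writes at ~96% of peak
# throughput; by 90% full the newest data lands at only ~52% of it.
```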
The effects also apply to new drives. All else being equal, a new 1TB drive is faster than a new 200GB drive, because the 1TB drive stores bits closer together and won't fill down to the inner tracks as fast. However, attempting to use this to inform purchasing decisions is rarely helpful: manufacturers may use multiple platters to reach the 1TB size, smaller platters to limit a 1TB system to 200GB, software/disk-controller restrictions to limit a 1TB platter to only 200GB of space, or sell a drive with partially completed/flawed platters from a 1TB drive with lots of bad sectors as a 200GB drive.
Other Factors
It's worth noting here that the above effects are fairly small. Computer hardware engineers spend a lot of time working on how to minimize these issues, and things like hard drive buffers, Superfetch caching, and other systems all work to minimize the problem. On a healthy system with plenty of free space, you're not likely to even notice. Additionally, SSDs have completely different performance characteristics. However, the effects do exist, and a computer does legitimately get slower as the drive fills up. On an unhealthy system, where disk space is very low, these effects can create a disk-thrashing situation, where the disk is constantly seeking back and forth across fragmented data. Freeing up disk space can fix this, resulting in more dramatic and noticeable improvements.
Additionally, adding data to the disk means that certain other operations, like indexing, antivirus scans, and defragmentation, are simply doing more work in the background, even if they're doing it at or near the same speed as before.
Finally, disk performance is a huge indicator of overall PC performance these days... an even larger indicator than CPU speed. Even a small drop in disk throughput will very often translate into a real, perceived drop in overall PC performance. This is especially true because hard disk performance hasn't kept pace with CPU and memory improvements; the 7200 RPM disk has been the desktop standard for over a decade now. More than ever, that traditional spinning disk is the bottleneck in your computer.
Comments
It doesn't really speed up PCs; it only reduces the chance of file fragmentation, which makes HDDs slower. This is one of the greatest PC myths that everyone repeats. To find bottlenecks on the PC, trace it with xperf/WPA. – magicandre1981 – 2015-04-20T04:25:21.570
FWIW it speeds up the experience of using a PC. – edthethird – 2015-04-20T15:06:02.640
@magicandre1981: There is a tiny gem of truth. The more things in each folder, the slower file traversal is, which impacts anything using a filepath, which is... everything. But that's tiny. – Mooing Duck – 2015-04-20T18:29:40.487
@MooingDuck While true, that's related to the number of files in a folder, not to the size of the files or the amount of space remaining on the drive. That effect is not related to remaining disk space. The effect is also limited in scope to the folder itself; it won't "slow down" the whole computer. Some filesystems, ext3/4 for example, use hashed directory trees to make lookups (including subfolder access) fast, thus limiting the scope of the effect even more, e.g. only when listing contents of a directory. – Jason C – 2015-04-20T18:57:43.097
What videos were you watching exactly? – Loko – 2015-04-21T12:08:19.340
@Loko I watched this one and another one on the same theme I can't find anymore. I also watched this one and a bunch of others that are more engineer oriented. – Remi.b – 2015-04-21T13:26:52.497