From my understanding, ext4 tries to write a file's data into the largest contiguous run of free blocks it can find. This greatly reduces latency when those files have to be read back because, for the most part, the whole content of an individual file lies in one contiguous region of the disk, so the drive's head has less seeking to do to reach every block that makes up that one file.
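If you want to check how contiguously ext4 has actually laid a file out, you can count its extents. Below is a minimal sketch that shells out to filefrag (part of e2fsprogs); the "N extents found" summary line is the tool's normal output, but the parsing, and the assumption that filefrag is on your PATH (it may need root on some setups), are mine:

```python
#!/usr/bin/env python3
"""Rough check of how contiguously files are stored (ext4 and friends)."""
import re
import subprocess
import sys

def extent_count(path: str) -> int:
    # filefrag prints a summary like: "/path/to/file: 3 extents found"
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True, check=True)
    match = re.search(r"(\d+) extents? found", out.stdout)
    if not match:
        raise ValueError(f"unexpected filefrag output: {out.stdout!r}")
    return int(match.group(1))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        # A single extent means the file's blocks are fully contiguous on disk.
        print(f"{path}: {extent_count(path)} extent(s)")
```

A file reported as one extent sits in a single contiguous run; a large file split into dozens of extents is exactly the fragmentation being discussed here.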
Ext4 can still become fragmented, but much less so, and not necessarily in a way that hurts read/write performance as severely as on NTFS. On NTFS, data is written to the first open blocks in the path of the head: wherever the head happens to be, it writes as much of the data into the nearby free blocks as will fit, then writes the rest wherever it lands elsewhere on the disk once the head has to move, say, to another part of the disk to access a different file for a program you just loaded while the first file was still being written.
This means a large file is likely to end up spread across blocks on widely separated tracks, which is why NTFS needs defragmenting so often.
It is also why servers generally don't use it: a server has much heavier I/O, with data constantly being written to and read from disk 24/7.
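To make the difference concrete, here is a toy simulation. It is not how either filesystem's real allocator works; the disk size, chunk size, and the "reserve one contiguous run per file" rule are all assumptions for illustration. It interleaves the writes of two files, once with a "first free block" policy like the one described above for NTFS, and once with a per-file contiguous reservation standing in for ext4's behaviour:

```python
"""Toy model of the two allocation behaviours described above.

This is NOT how NTFS or ext4 are actually implemented. It only contrasts
"put each chunk in the first free blocks" with "reserve one contiguous
run per file up front" (a crude stand-in for ext4's delayed allocation),
to show why interleaved writes fragment the first scheme much more.
"""

DISK_BLOCKS = 64   # size of the pretend disk, in blocks
FILE_BLOCKS = 12   # each file will eventually be 12 blocks long
CHUNK = 3          # blocks written per turn; files A and B take turns

def fragments(blocks):
    """Count runs of consecutive block numbers (1 == fully contiguous)."""
    blocks = sorted(blocks)
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

def simulate(reserve_per_file):
    claimed = set()              # blocks already written or reserved
    reserved = {}                # file name -> iterator over its reserved run
    placed = {"A": [], "B": []}

    def next_blocks(name, n):
        if reserve_per_file:
            if name not in reserved:
                # Claim one contiguous run big enough for the whole file.
                start = next(i for i in range(DISK_BLOCKS - FILE_BLOCKS + 1)
                             if not any(j in claimed
                                        for j in range(i, i + FILE_BLOCKS)))
                claimed.update(range(start, start + FILE_BLOCKS))
                reserved[name] = iter(range(start, start + FILE_BLOCKS))
            return [next(reserved[name]) for _ in range(n)]
        # First-free: each chunk lands on the lowest unclaimed blocks,
        # i.e. wherever the head happens to find space.
        chunk = [i for i in range(DISK_BLOCKS) if i not in claimed][:n]
        claimed.update(chunk)
        return chunk

    for _ in range(FILE_BLOCKS // CHUNK):        # interleave writes of A and B
        for name in ("A", "B"):
            placed[name].extend(next_blocks(name, CHUNK))
    return placed

for reserve in (False, True):
    placed = simulate(reserve)
    label = "reserve-contiguous" if reserve else "first-free"
    print(f"{label:>18}: A in {fragments(placed['A'])} fragment(s), "
          f"B in {fragments(placed['B'])} fragment(s)")
```

With the first-free policy each file ends up in four fragments; with the reservation policy each file stays in one contiguous run, which is the point of the comparison above.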
Also, I'm not sure, but if chkdsk checks the integrity of each file (which I believe both it and fsck do), then it would also be slower in comparison because of what I just described about fragmentation on NTFS.
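For what it's worth, fsck and chkdsk work largely on filesystem metadata rather than re-reading every file's contents, so the sketch below is not what either tool actually does. It just walks a directory tree and reads every file end to end, which is enough to illustrate the underlying point: any scan that has to touch every file pays for extra seeks when those files are fragmented. The paths and chunk size are arbitrary, and the timing only means much on a cold cache and a spinning disk:

```python
"""Crude timing sketch: walk a tree and read every file end to end."""
import os
import sys
import time

def read_everything(root):
    total = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    # Read in 1 MiB chunks so big files don't exhaust memory.
                    while True:
                        chunk = f.read(1024 * 1024)
                        if not chunk:
                            break
                        total += len(chunk)
            except OSError:
                continue  # skip unreadable entries (permissions, sockets, ...)
    return total, time.perf_counter() - start

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    nbytes, seconds = read_everything(root)
    print(f"read {nbytes} bytes under {root!r} in {seconds:.1f} s")
```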
I haven't had Windows chkdsk an NTFS volume on bootup since 2008 R2 was released. Even in a CSV cluster with multiple nodes accessing the same NTFS volume locking tens of thousands of Lucene index files. It's quite impressive. – Brain2000 – 2018-10-27T20:38:34.493