When overwriting a file, how does this affect the NTFS filesystem?


Say I have a secondary hard drive that has been defragged and optimized so that the data sits in one neat, contiguous block with no gaps or free clusters. Like a solid brick wall.

If I copy over a file with the same name, overwriting the one on the secondary drive, is the data stored in exactly the same space/clusters?

What if the new file is smaller? Will this create a new gap in the clusters?

What if the new file is larger? Will part of the data fill the existing clusters and then use a free one at the end of the drive, making the file fragmented?

Rick

Posted 2015-02-06T21:07:05.050

Reputation: 1

Answers


Say I have a secondary hard drive that has been defragged and optimized so that the data sits in one neat, contiguous block with no gaps or free clusters. Like a solid brick wall.

--> Picture file system storage like this:

[(data1)(data1)(data2)(data2)(data3)(data3)(data4)(data4)(data5)(data6)(empty)(data7)]

Each item in (..) is one block. Blocks can be 4k, 8k, 16k, or 32k, whatever suits your storage needs.

If I copy over a file with the same name, overwriting the one on the secondary drive, is the data stored in exactly the same space/clusters?

--> Generally yes, though you can't know for certain.

It depends on the size of the file being copied over. If the file was modified to add or remove data, it can use more or fewer blocks. On a file system with 4k blocks, a file must occupy whole 4k clusters: if the data is only 5k, it needs two 4k clusters.
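The round-up arithmetic above can be sketched in a few lines of Python (the 4 KiB cluster size is just an assumption for the example; NTFS cluster size depends on how the volume was formatted):

```python
import math

def clusters_needed(file_size, cluster_size=4096):
    """Whole clusters a file occupies: size rounded up to the next cluster."""
    return math.ceil(file_size / cluster_size)

def slack_space(file_size, cluster_size=4096):
    """Unused bytes left over in the file's last cluster."""
    return clusters_needed(file_size, cluster_size) * cluster_size - file_size

# A 5 KiB file on a volume with 4 KiB clusters:
print(clusters_needed(5 * 1024))  # 2 clusters
print(slack_space(5 * 1024))      # 3072 bytes wasted in the second cluster
```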

What if the new file is smaller? Will this create a new gap in the clusters?

--> If the new file is smaller, the "gap" is really just unused space in its last cluster, since the file is rounded up to the nearest number of blocks needed to store it.

What if the new file is larger? Will part of the data fill the existing clusters and then use a free one at the end of the drive, making the file fragmented?

--> Generally yes, though on today's file systems these issues are not as noticeable.

The exception is a process reading/writing continuously on a block file system; in that case I would just create a RAM drive for the process to work in.
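To see why a grown file fragments on a packed volume, here is a toy first-fit allocator in Python (a deliberate simplification; a real NTFS allocator is more sophisticated, but the effect on a nearly full disk is the same). The disk layout mirrors the `[(data1)...(empty)(data7)]` picture above:

```python
def grow_file(disk, name, extra_clusters):
    """Toy first-fit sketch: extend `name` by claiming the first free
    clusters (None entries). If no free cluster is adjacent to the
    file's existing clusters, the file becomes fragmented."""
    for _ in range(extra_clusters):
        free = disk.index(None)  # first-fit: leftmost free cluster
        disk[free] = name
    return disk

# Packed volume with a single free cluster in the middle:
disk = ["a", "a", "b", "b", "c", None, "d"]
grow_file(disk, "a", 1)
print(disk)  # ['a', 'a', 'b', 'b', 'c', 'a', 'd'] -- "a" is now fragmented
```

File "a" now occupies clusters 0, 1, and 5, so reading it back requires a seek; that is all fragmentation is.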

In summary, I would encourage you to focus on other performance factors, such as using prelink and preload, and setting 'vm.swappiness = 10' in the /etc/sysctl.conf file if you have a Debian system.

Cheers Rick :)

Neosimago

Posted 2015-02-06T21:07:05.050

Reputation: 39