In general, block allocation is the most expensive operation in a filesystem, so filesystems try quite hard to avoid it, in particular by reusing blocks whenever possible. In practice, this means the following:
- When overwriting an existing file, the same blocks are reused; new blocks are allocated only when the new data exceeds the size of the overwritten file.
- When truncating an existing file, all of its blocks are released and become potentially reusable for other file operations. If the file is then rewritten, it may be given new blocks: there is no guarantee that the new contents land on the same blocks, and in particular the old blocks may have been reallocated to other files in the meantime (see the sketch after this list).
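To make those two cases concrete, here is a minimal Python sketch for a POSIX system (the file name and sizes are just placeholders, not anything from the question). It only shows what the application asks of the filesystem; whether the same physical blocks are actually reused is precisely the implementation-dependent part discussed below.

```python
import os

PATH = "example.dat"   # hypothetical test file

# Create an initial file so there is something to overwrite or truncate.
with open(PATH, "wb") as f:
    f.write(b"A" * 8192)

# Case 1: overwrite in place. Opening WITHOUT O_TRUNC keeps the existing
# allocation, so the same logical blocks are normally rewritten; new blocks
# are needed only for data written past the old end of file. This is what
# "shred"-style tools rely on.
fd = os.open(PATH, os.O_WRONLY)
try:
    os.write(fd, b"B" * 8192)
    os.fsync(fd)       # push the overwrite down to the device
finally:
    os.close(fd)

# Case 2: truncate and rewrite. O_TRUNC releases all the blocks first; the
# rewrite may or may not land on the same blocks -- that is entirely up to
# the allocator, and the old blocks may already belong to another file.
fd = os.open(PATH, os.O_WRONLY | os.O_TRUNC)
try:
    os.write(fd, b"C" * 8192)
    os.fsync(fd)
finally:
    os.close(fd)
```

On Linux, running `filefrag -v example.dat` after each step shows which physical extents actually back the file, so you can observe what your particular filesystem does.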
However, much depends on the filesystem internals. Log-structured filesystems perform all writes sequentially across the whole partition, so with such a filesystem it is pretty much guaranteed that the new file will not overwrite the blocks from the old file. Journaling filesystems may copy the file contents to an extra structure (the "journal") in addition to the actual permanent storage (depending on whether the journaling covers the file contents or just the metadata). Some filesystems also use a "phase tree", which can be viewed as a log-structured filesystem with a tree instead of a list; for these, overwrites may or may not happen.
An important point to consider is that block allocation strategies depend not only on the filesystem, but also on the implementation. There is no guarantee that Windows XP and Windows 7, for instance, behave similarly on the same NTFS filesystem. One OS version may find it worthwhile to keep old blocks around to "speed up (re)allocation", while another may use a different strategy. This is all heuristics, tuned and retuned. Thus, one cannot really answer your question about "NTFS"; one would have to talk about "NTFS as implemented in OS foobar, version 42.17, build 3891".
Moreover, all these blocks are only what the OS sees; the actual physical storage may differ, and may move or copy data around on its own. This is typical of the wear-levelling algorithms in SSDs. Generally speaking, overwriting/shredding files on an SSD is not reliable (see this answer for details and pointers). But some data movement can also happen with magnetic disks (in particular when a flaky sector is detected: remapping is done on the fly, and the old sector remains untouched, forever).
This basically means that file shredding does not work well, in the sense that it cannot guarantee that the data will actually be destroyed. You should use file shredding only as an emergency measure, when other methods have failed or were erroneously not applied. The correct ways to permanently destroy a file are:
- Wholesale destruction of the complete disk, e.g. by dissolving it in acid.
- Encryption: when the data is encrypted, destroying the key is enough to make the data unrecoverable (sketched below). While this does not completely solve the issue (you still have to destroy a data element), it makes it much easier: a key is small, and it is much easier to destroy 128 bits than 128 gigabytes.
Secure erase, when implemented properly by the disk, relies on the same encryption trick: the disk encrypts everything it stores with an internal key, and erasing amounts to discarding that key.
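To illustrate the encrypt-then-destroy-the-key idea, here is a small Python sketch. It uses the third-party `cryptography` package and made-up file/payload names, and it only shows the principle per file; real deployments do this at the volume level (full-disk encryption) with keys that never touch the disk in clear.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def write_encrypted(path: str, data: bytes, key: bytes) -> None:
    """Write `data` to `path` encrypted with AES-GCM (random nonce prepended).
    The plaintext never reaches the disk, so destroying the key later is
    enough to make the stored bytes unrecoverable."""
    nonce = os.urandom(12)
    with open(path, "wb") as f:
        f.write(nonce + AESGCM(key).encrypt(nonce, data, None))

key = AESGCM.generate_key(bit_length=128)                 # 128 bits: the only secret
write_encrypted("secret.dat", b"sensitive payload", key)  # hypothetical names

# "Destroying the file" now reduces to destroying these 16 bytes of key
# material (ideally kept in a smartcard, TPM, or a key file on a separate,
# easily destroyed medium). Note that `key = None` does not wipe the bytes
# from RAM; key hygiene in memory is a separate problem.
key = None
```

The crucial point is that the data is encrypted before it is ever written; encrypting an already-stored plaintext file after the fact leaves the old plaintext blocks behind, which is exactly the problem described above.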