Your concept is a good one, but there's a more effective method called file shredding. Instead of just marking the sectors as free, shredders first overwrite those sectors with data. The overwriting data may be a sequence of zeros or random values. The goal is to prevent recovery by making the original data on the disk unrecoverable.
On Linux, you can use shred:
shred -u <file_name>
By default, this overwrites the file's sectors 3 times. You can alter this count using the -n switch:
shred -n <count> -u <file_name>
However, even a single pass is enough to defeat software-based recovery.
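Putting the two commands together, here is a minimal sketch of a single-pass shred (the filename secret.txt is just an example; -z is an optional extra):

```shell
# Create a throwaway file, then overwrite and remove it with shred.
echo "sensitive data" > secret.txt

shred -n 1 -z -u secret.txt
# -n 1 : one overwrite pass with random data (default is 3)
# -z   : add a final pass of zeros to hide that the file was shredded
# -u   : truncate and remove the file after overwriting
```

Note that shred's effectiveness depends on the filesystem overwriting data in place; on journaling, copy-on-write, or SSD-backed filesystems, the old blocks may survive elsewhere.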
If you're worried about well-funded, patient attackers, you might want to look at recovery techniques that claim to retrieve data even after it has been overwritten. More exotic analysis, such as magnetic force microscopy, has been proposed as a way to recover overwritten data. However, most experts (including the NSA) consider this near impossible with modern drives.
Despite this, there are standards that attempt to make analysis of the disk surface more difficult. These apply specific bit patterns intended to degrade any latent information beyond recovery, and the patterns are designed around the physical construction of magnetic disks. For example, the pattern "0xF6, 0x00, 0xFF, random, 0x00, 0xFF, random" (i.e. one whole pass of each) is designed to eliminate data traces from standard magnetic platters. Other methods use different patterns for different devices, some with dozens of passes. However, this is widely considered unnecessary, even for classified data.
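To make the idea of pattern passes concrete, here is an illustrative sketch of two manual passes (one fixed-byte, one zero) written with dd. This is not a real shredding tool — it ignores caching, journaling, and drive-level remapping, and the filename demo.bin is just an example:

```shell
# Create a small throwaway file to "shred" by hand.
printf 'old secret' > demo.bin
size=$(stat -c %s demo.bin)

# Pass 1: overwrite the file's bytes with a fixed pattern (0xFF),
# mimicking one fixed-pattern pass from a multi-pass standard.
tr '\0' '\377' < /dev/zero | dd of=demo.bin bs=1 count="$size" conv=notrunc status=none

# Pass 2: overwrite with zeros, as a final masking pass.
dd if=/dev/zero of=demo.bin bs=1 count="$size" conv=notrunc status=none
```

A real tool would also sync the writes to disk and remove the file afterwards; the point here is only that each "pass" is a full overwrite of the same byte range with a chosen pattern.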
Further reading: