89

After decades of hearing that "delete" does not really make the data impossible to recover, I have to ask WHY the OS was not corrected long ago to do what it should have been doing all along? What is the big deal? Can't the system just trundle along in the background overwriting the freed space and doing whatever else has to happen? Why do we need additional utilities to do what we always thought was happening? What is the motivation of OS developers to NOT correct this problem?

ADDITION: This is not a technology question, because clearly it IS possible to delete things securely, or else there would not be tools available to do it. It is a policy question: If some people feel that it is important and should be part of the OS, why is it not part of the OS? Many things have been added to OSes over the years, and this could certainly be one of them. And it IS an important issue, or there would not have been articles and stories about it for about 3 decades now. What is with the inertia? Just do the right thing.

  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/34360/discussion-on-question-by-no-comprende-why-didnt-oses-securely-delete-files-rig). – Rory Alsop Jan 22 '16 at 09:23
  • Because it's expensive and most people don't care about it. Also, most information isn't secret or anything. – BlueWizard Feb 11 '16 at 12:48

11 Answers

248

Because of the following reasons:

  • Performance - it takes up resources destroying files. Imagine an application that uses hundreds or thousands of files. It would be a huge operation to securely delete each one.
  • Extra wear and tear on the drives.
  • Sometimes the ability to retrieve a file is a feature of the OS (e.g. Trash, Recycle Bin, Volume Shadow Copy).
  • As noted by Xander, sometimes the physical storage mechanism is abstracted from the OS (e.g. SSDs or network drives).
SilverlightFox
  • 84
    +1 I would add that the lack of concern of most users about whether their files are deleted securely or not is also a probable factor. If 95% of users clamored for universal secure delete, it would more likely be a feature and the issues you mention would be managed one way or another (possibly by just different user expectations). – Todd Wilcox Jan 15 '16 at 15:42
  • 40
    That performance thing - long before SSDs, when operating systems were 'right at the beginning' - just think of the blazing performance of securely deleting material from that [nine track tape](https://en.wikipedia.org/wiki/9_track_tape). Or for those 140k floppies. Want to securely delete a floppy? Move the stuff you don't want to delete to another floppy and destroy the original. Easier than adding another couple of routines to Applesoft DOS 3.3. –  Jan 15 '16 at 22:22
  • 30
    Also: when you do want to secure-erase something, you need to erase it from all your backups, too. (You do have backups, right?) Secure-erasing by default would be a total waste of time since users will only take care to secure-erase things from their backups when actually needed. – Peter Cordes Jan 16 '16 at 06:17
  • @MichaelT Good point. You can still do that - just store your stuff on a USB stick. – icc97 Jan 16 '16 at 12:10
  • @PeterCordes if you want a backup you could even have two USB drives. – icc97 Jan 16 '16 at 12:11
  • 21
    The users who are concerned probably also want their files to be encrypted in the first place, which is a much better alternative in most cases, and *is* a feature of quite a few OSes. – user23013 Jan 16 '16 at 17:55
  • 3
    Secure deletion has no performance impact if the storage is encrypted. You just securely erase the key, which is an O(1) operation instead of O(n). – R.. GitHub STOP HELPING ICE Jan 17 '16 at 01:47
  • 7
    @R.. *Iff* the part of the storage that is to be securely deleted has a separate key. – user Jan 17 '16 at 19:57
  • @MichaelKjörling: Indeed, this design requires a filesystem with per-file keys stored in the inode or equivalent for the file. – R.. GitHub STOP HELPING ICE Jan 17 '16 at 21:26
  • 1
    Also, there are working solutions for non-realtime secure wiping/scrambling of drives for when users get rid of their drives, which I would imagine cover 80% of the use cases for secure deletion. For the other 20% (theft, government seizure, etc.), there is disk encryption. – xdhmoore Jan 18 '16 at 15:56
  • Secure deletion of a stack of floppies took seconds, using a hand-held degausser. – JDługosz Jan 20 '16 at 01:21
  • 4
    Back in the days of DOS (before the advent of Recycle Bins and such), the fact that deletion wasn't permanent was definitely a feature. The `undelete` command saved my ass on more than one occasion. – James_pic Jan 20 '16 at 16:59
  • 1
    @R - in that case the performance overhead is in encrypting all storage (which would need to include disk swap files where performance is very important). And that still leaves the option of recovering the key and extracting the data based on that (full delete to be safe from specialist recovery methods would require repeatedly writing to the storage containing the key - and a single overwrite on magnetic storage could still leave a readable record). – Kickstart Jan 21 '16 at 13:40
  • 1
    @James_pic undelete is child's play compared to the king: Novell Netware. They did not have a "recycle bin"--the drive **was** the recycle bin. When a file was deleted it was simply marked as deleted and the space counted as part of the free space on the drive. Files actually only vanished when they were the oldest deleted file on the drive and the OS needed the space. More retention than the recycle bin and you didn't have the issue of the files in the recycle bin taking up drive space. – Loren Pechtel Jan 22 '16 at 04:06
  • I think it (secure erase) should at least be provided as an option; it doesn't even have to be standard. – My1 Jan 25 '16 at 09:08
108

Instead of another "You are wrong because" answer I'd like to take a slightly different approach:

Early computer OS's were written by programmers for programmers. Anyone who programs and knows what pointers are understands that "deleting" a pointer doesn't delete the thing it's pointing at: they are separate.

That doesn't mean that delete doesn't actually delete. That pointer is gone. Trying to use it after "deleting" it (freeing the memory, rebinding the name) can result in bad things happening.
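
A toy sketch (not any real filesystem's code, just an illustration of the pointer/data distinction) may make that concrete: the directory entry is the pointer, the data block is the thing it points at, and a typical fast delete only touches the former.

```python
# Toy model of a directory and a block store. "Deleting" a file here only
# removes the directory entry (the pointer); the block keeps its old bytes
# until something new happens to be written over them.
blocks = {}      # block number -> bytes sitting on the "disk"
directory = {}   # filename -> block numbers (the pointers)

def write_file(name, data, at_block):
    blocks[at_block] = data
    directory[name] = [at_block]

def delete_file(name):
    del directory[name]   # forget the pointer, leave the data alone

write_file("secret.txt", b"my password is hunter2", at_block=7)
delete_file("secret.txt")

print("secret.txt" in directory)   # False: the file is "gone"
print(blocks[7])                   # ...but the bytes are still there
```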

But history marches on, and now end users who have a different concept of delete (like yourself) are in the picture. They (and you) have expectations that are not unreasonable (whatever else is said in this thread).

But delete will not ever mean (for a computer) what you think it should: there are reasons both technical (detailed quite well in other answers) and social (45 years of inertia).

The modern (and I'm including *nix) OS abstracts a lot of things for you: you no longer need to be a computer expert to own/operate a computer in the same way you no longer need to be a mechanic to own/operate a car. The price you pay is that those abstractions are leaky: there's a fundamental disconnect that can never quite be bridged. A computer "document" isn't really a document, a "desktop" is not a desktop, a "window" is not a window, etc.

Jared Smith
  • 27
    +1 computers weren't originally designed for laymen. – user541686 Jan 15 '16 at 21:04
  • 3
    +1 because ultimately it does come down to "that's how it was done in the past, and in the past they had different priorities". – Spudley Jan 16 '16 at 07:49
  • 4
    @Insane not a down voter; but if I was going to object over anything it'd be ignoring that secure delete is incompatible with a feature that ordinary users need far more often (and which is pointed out in other answers): Being able to recover something deleted by accident. – Dan Is Fiddling By Firelight Jan 20 '16 at 20:49
  • @DanNeely I'm not ignoring it, the other answers already covered that turf (as well as the performance considerations that make it impractical). I'm just focusing on the part that no one else did: these metaphors we use in computing are flawed and potentially confusing. 'Deleting' a digital file isn't the same thing as 'deleting' a real paper file because there's a dissonance in the terminology: the two cases are not sufficiently analogous. That's not necessarily a criticism (we have to call these things *something*), just an observation. – Jared Smith Jan 21 '16 at 03:07
  • And the secure delete process that a lot of tools perform would not work on SSDs and other flash-based devices. The micro-controllers embedded in the devices decide where the data gets written to even out the utilization/wear of the memory cells and make the device last longer. So a secure delete (e.g. `shred`) would not really destroy the file(s). – code_dredd Jan 22 '16 at 13:34
  • 6
    Removing a pointer to something can be a constant-time operation. Zeroing out deleted data is on the order of the length of the data. This is why many languages don't force the zeroing of allocated data either. It's lazy to not do it, but it's not a small or even constant factor in performance. – Rob Jan 23 '16 at 04:46
  • @Rob Well in fairness, an OS designed to securely erase something could defer secure erasure for a later time, and run it in the background, instead of sucking up all your I/O right then and there as if you decided to copy a huge file. – forest May 03 '16 at 06:17
  • 1
    Security was such an afterthought in the past that old Windows operating systems (or at the very least MS-DOS) would actually fill filesystem sector slack space with **contents from memory** at random rather than zeroing it or writing any other constant. The fact that no one stood up and said "that doesn't sound like a safe idea" just goes to show how much of an afterthought security really _was_. – forest Feb 26 '18 at 12:49
96

It doesn't have to be corrected because it's not a fault.

The pointers to the file are deleted, and the area the file occupied is marked as free space. The drive then overwrites this area in its own time. It's purely there to save wear and tear on the drive. After all, storage devices (especially SSDs) have a limited number of times they can write before they fail. Most users would not appreciate their drive failing after 6 months.

Secure solutions do exist with tools available to securely wipe free space on a hard drive.
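
For what it is worth, below is a minimal sketch of what such wipe utilities do for a single file. The `wipe_file` helper is a hypothetical name, and this is a simplification: on SSDs, journaling or copy-on-write filesystems, and systems with snapshots or backups, the overwrite may never reach the original blocks, so treat it as an illustration rather than a guarantee.

```python
import os

CHUNK = 1024 * 1024  # overwrite in 1 MiB pieces

def wipe_file(path):
    """Overwrite a file's bytes in place with random data, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(os.urandom(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())   # push the overwrite out to the device
    os.remove(path)            # only now drop the directory entry
```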

forest
James Hyde
  • 25
    +1, but the idea that a regular user *can* hit the wear limit of an SSD is becoming outdated. See [this article](https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead). Those 250 GB SSDs were still trucking at 1 *petabyte* written (4000x the drive's capacity). – Mike Ounsworth Jan 15 '16 at 16:20
  • 15
    This is a good point, however if files were securely deleted every time (involving 1 or more writes over the entire location), then there is at least a higher chance of hard drive failure. – James Hyde Jan 15 '16 at 16:22
  • 5
    Correct, but as @Cruncher said, the notion that SSDs fail earlier than HDDs used to be true, but is quickly becoming a misconception. – Mike Ounsworth Jan 15 '16 at 16:24
  • 15
    I got a machine with a 256GB SSD OS drive in 2013, and a year later it was showing S.M.A.R.T. warnings that it was going to die. Got a replacement, and a year later, new SMART warnings about 3% remaining drive lifetime and I got *another* replacement. Maybe my habit of keeping the SSD at 95% capacity, keeping 20+GB of programs in RAM every day, running an always-on web server, and fighting tooth-and-nail against my corporate file-sync software crashing while it failed to sync thousands of files were contributors... It's not impossible to hit the wear limit of modern SSDs. – Carl Walsh Jan 15 '16 at 20:14
  • About whether it would cause wear and tear on the drive... Doesn't the OS need to zero the file contents before creating a new file over the old one? I don't see how "wipe on delete" is really any different than "wipe on create" in terms of wear and tear. It seems like a **performance** optimization to make deleting fast and wiping the contents run in the background, but it seems like the OS could expediently run this background task instead of delaying it. – Carl Walsh Jan 15 '16 at 20:23
  • 9
    @CarlWalsh no, the OS doesn't need to zero anything before creating a new file. Each block of the new file can be (and will be) simply written over some currently unused block, no matter if it is "fresh" or if it was previously a part of some other file. – Peteris Jan 15 '16 at 20:29
  • 6
    @CarlWalsh No; The OS and/or filesystem with copy-on-write only need to give the **illusion** that a freshly-allocated block has a given value. Much like `calloc()` doesn't actually need to allocate and wipe memory until such time as it is actually used, a filesystem could pretend the contents are zero until you make a write to it. – Iwillnotexist Idonotexist Jan 15 '16 at 21:00
  • @MikeOunsworth the only reason that is the case is because of wear levelling. If data had to be immediately securely erased this would multiply writes many times and render wear levelling ineffective. – JamesRyan Jan 20 '16 at 15:04
  • @CarlWalsh: Cheap flash chips are subdivided into large blocks containing 1024 or more pages (what the OS thinks of as sectors); there's no way to alter a non-blank page except by erasing the entire block on which it is contained. Thus, a drive which receives an OS "sector write" command will need to find a blank page, write the new data there, and update an index to indicate where the new data can be found. If blank pages get to be in short supply, the drive will find a block which has as many dead (superseded) pages as possible, copy the live pages from that block elsewhere, and then... – supercat Jan 20 '16 at 19:42
  • ...erase the block once everything has been moved elsewhere. If a block has 992 live pages and 32 dead ones, erasing those 32 dead pages will require copying 992 live pages elsewhere; there may not be any reason to do that until the drive is 95% full unless the system decides to reuse the block for purposes of wear leveling. – supercat Jan 20 '16 at 19:50
75

You seem to have a wording problem with the term delete and a wrong expectation of what the functionality should do.

You can check the simple definition on the Merriam-Webster website:

delete: to remove (something, such as words, pictures, or computer files) from a document, recording, computer, etc.

The goal of the delete feature is to remove the selected objects from their current location. They can be moved to another temporary location (the trash bin) to prevent any accidental loss, or the space they occupied might directly be marked as free.

Compare this to the definition of erase for instance:

erase: to remove (something that has been recorded) from a tape (such as a videotape or audiotape) or a computer disk; also : to remove recorded material from (a tape or disk)

: to remove (something written) by rubbing or scraping so that it can no longer be seen

: to remove something written from (a surface)

This one really goes to a lower level, here we are not just removing the file from the folder, we are removing the file's data from the disk, and in this subtlety lies all the difference.

Erase is the word you will most often encounter in the Windows world; in the Unix world you may encounter wipe instead, which describes the process from a more technical point of view:

wipe: to clean or dry (something) by using a towel, your hand, etc.

: to remove (something) by rubbing

: to move (something) over a surface

Why don't the OSs do this by default? For several reasons:

  • Common home users do not need such a feature: all they need is either to simply remove a file from a folder, or, in the case of external drives, to ensure that the drive gets cleaned completely ("slow" format in layman's terms). They rarely need to erase a single specific file.

  • Common home users actually do not want such a feature. You will see that the default is not only to not erase the file, but to not really delete it either: instead it is moved to the trash can, because home users expect their OS to be able to save them in case of a mistake. You often have to use a specific key combination (Shift + Del) to delete a file without going through the trash can.

  • Finally, such an option may not be easy or even possible to implement. Technological evolution has added several logical and physical layers around your actual data:

    • Some OSes take regular snapshots of the file system content; in such a case your file might be included in one or several of these snapshots. It may become complicated to ensure proper file erasing in such conditions without endangering the snapshots' integrity.

    • As mentioned in other answers and comments, to increase your storage device's life there is an abstraction level between the actual storage and how it is seen by the OS. In such a case, while the OS can do its best to ensure that the file's data gets deleted at the file-system level, it has no way to ensure it has been effectively erased at the storage level.

On one side you have the OS, which reasons in terms of files (to simplify) but does not really care about byte storage; on the other side you have the storage device's firmware, which reasons in terms of bytes but does not know anything about the files or how the file system is actually laid out.

So, does this mean that everything is lost and we need to wait for some improbable future to bring us file systems implemented natively in the storage device's firmware? No, but what you should do depends on your actual concern:

  • If your concern is about the data at your file-system level, then third-party software is available which will add a new erase / wipe option to your context menus. You need to manually ensure, though, that copies of these files are not present in any file-system snapshot or backup; since this requires a case-by-case decision, the OS cannot do it for you (you do not want your OS's eraser wizard to screw up your backups for the sake of helping you, do you?).

  • If your concern is about the data physically stored on your storage device, use file-system encryption. This will ensure that potential "leaking" data due to wear leveling and bad block handling will not be exploitable by anyone getting their hands on your storage device.

WhiteWinterWolf
  • This is the correct answer. – PureW Jan 17 '16 at 15:08
  • 13
    In UNIX, for example, there is no "delete", "erase", or "remove" function, only "unlink". – tudor -Reinstate Monica- Jan 17 '16 at 22:51
  • 1
    @tudor: Indeed, you are raising an important point. While "unlink" is a clearly and unambiguously defined function at the file-system level, an erase/wipe function actually works by overwriting the file content, since there is physically no way to just "delete" the data from the disk: it is only possible to overwrite it. Here come different overwriting approaches: using random data, using specific patterns, using one or a certain number of passes, etc., each having its advantages and disadvantages. That's why it is up to the user to choose third-party software matching his own needs. – WhiteWinterWolf Jan 18 '16 at 09:23
11

For performance reasons. Deleting the file from the index and declaring that the zone where the file was is now free and can be re-used is far more efficient than erasing all the data in that zone.
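
To get a feel for the gap, here is a rough benchmark sketch (the 64 MiB size and the helper below are arbitrary choices for illustration; absolute timings will vary wildly with hardware and filesystem):

```python
import os
import tempfile
import time

SIZE = 64 * 1024 * 1024  # 64 MiB test file

def make_file():
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(b"\0" * SIZE)
    return path

# Plain delete: only the index/metadata is updated.
path = make_file()
t0 = time.perf_counter()
os.remove(path)
print("plain delete:       %.4f s" % (time.perf_counter() - t0))

# "Secure" delete: every byte is overwritten before the entry is dropped.
path = make_file()
t0 = time.perf_counter()
with open(path, "r+b") as f:
    f.write(os.urandom(SIZE))
    f.flush()
    os.fsync(f.fileno())
os.remove(path)
print("overwrite + delete: %.4f s" % (time.perf_counter() - t0))
```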

Benoit Esnard
  • For SSDs that's turned around. – JDługosz Jan 20 '16 at 01:23
  • @JDługosz: Not really. Flash-based drives need to physically erase all of the information on a block before they can reuse any of the space, but if someone wants to delete a file that uses 64 of the 1024 sectors in a block, marking the 64 sectors as invalid will be much faster than copying the 960 sectors whose content is still needed elsewhere and then erasing the block that contained those 64 sectors. – supercat Jan 20 '16 at 19:36
  • I mean the `trim` command. Blocks are marked as unused at a very primitive level rather than just hanging out until they happen to get reused. Rather than being more efficient to do nothing (not clear) it's more efficient to not recopy blocks that are unused. – JDługosz Jan 21 '16 at 00:47
  • @JDługosz: Given the IMHO unfortunate decision to have all USB drives use sector-based access rather than something closer to NFS, having a mid-level "mark sector obsolete" operation is more efficient than *either* physically erasing it *or* merely having the higher-level system regard it as unused without telling the drive about it. – supercat Jan 23 '16 at 19:42
  • It's an issue with SATA/SAS as well. Not usb, but changing the storage layer away from what the command structure was designed with, and historically having more intelligence in the host and less in the device. – JDługosz Jan 23 '16 at 21:09
7

Not everybody agrees with your definition of what "delete" should mean.

For 99.9% of users, they're not worried about someone sniffing around getting data. They want space to store more torrented Teletubbies episodes. For most people, simply no longer having reserved the space for the file is sufficient.

Then there's the group of individuals you are a part of, that want to have the file erased so that it cannot be recovered. However, did you properly consider the cached copies of the file that may be elsewhere on your harddrive? It can actually be difficult to ensure something is fully deleted. For someone who wants a feature like this, you may have already balanced the security questions, but do you think the 99.9% did?

Finally, consider those who really care about erasing their secrets. The government standard process is to throw the hard drives into a grinder. Why? Did you know that when you erase the file, some of the magnetic properties of the disk keep your data accessible? It can't be accessed by the normal hard drive head, but take the platter out and put it in an expensive magnetic reader, and they can actually pull data off.

If the government wishes to reuse the hard drive, the standard process is a 7-pass wipe, where you write to every sector of the disk 7 times. They usually just use the grinder, because this process is so mechanically intensive that most disks don't even survive the encounter.
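
For illustration only, a multi-pass overwrite along those lines might look like the sketch below (the pass patterns and pass count are arbitrary choices, not any particular government standard, and the same SSD/journal caveats mentioned elsewhere apply):

```python
import os

PASSES = [b"\x00", b"\xff", None]   # zeros, ones, then one pass of random data

def multipass_wipe(path, chunk=1024 * 1024):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in PASSES:
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n) if pattern is None else pattern * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())    # make sure each pass reaches the device
    os.remove(path)
```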

What does 'delete' mean anyway?

EDIT: I tried to draw a few points, and let others draw the lines, but it looks like it might help for me to draw the line too. Security always involves compromises. Few people understand security well enough to pick the level of compromise properly. For those who can, you can then divide people into three categories. There's one category of people who are less concerned with security, and will hate a product for pushing more security than they want at the price of usability. There's one category of people who are more concerned with security than you are, and will be frustrated with the lack of security in the product. Finally, there is one category with you, who is happy with that balance.

If everybody has their own definition of what "secure enough" is, getting the secure delete files "right" from the beginning is harder than it sounds.

forest
Cort Ammon
  • 1
    I can't downvote on this Q&A site, but... "99.9%" (source?) "because this process is so mechanically intensive...don't even survive the encounter" (source?) "some of the magnetic properties...pull data off." (there is actually a research paper that debunked the claim that a 1-time wipe would not erase enough information to make recovery impossible. Don't have the link here though). – Sumurai8 Jan 16 '16 at 18:15
  • 1
    @sumurai8 I'd be surprised if one in a thousand users are aware enough of the nuances of file storage to be willing to police their os caches of files to be able to gain any security from such a secure erase. For those who are not constantly aware of what their os is doing for them, such a feature would be snake oil. I'd love to see that paper. The government is clearly less trusting than you are, though it's always neat to see new hard evidence. Perhaps the govt rules were for older sloppier hard drives, and new ones exhibit less of an effect – Cort Ammon Jan 17 '16 at 03:43
  • @CortAmmon Governments aren't usually satisfied with "safe enough" in any respect anyway. This doesn't just apply to security, but to safety to a larger extent (e.g. "daily safe doses" of certain substances being set to values several times lower than what's actually harmful). – Cubic Jan 18 '16 at 13:59
  • 3
    Is this the paper you're referring to? The updated epilogue by Guttman himself https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html#Epilogue – user2867314 Jan 19 '16 at 10:56
6

For much the same reason as real-world trash collectors don't shred everything before throwing it into their truck.

There are two primary reasons why people may get rid of objects or information:

  1. They wish to prevent their use by anyone else (or perhaps some specific person or people).

  2. They do not believe that the items would have sufficient value to anyone as to justify the expense of keeping them or of finding someone to whom they might be useful.

Those who are getting rid of things for the first reason should destroy them so as to render them unusable, but for those who are getting rid of things for the second reason, it's preferable to minimize the total remaining expenses associated with the items in question. Because computers generate a lot of information which will become useless after a relatively short time, and because--for most computers--much of the information would be unlikely to cause any damage to anyone if released, there is in many cases little benefit to deliberately destroying information. It is sufficient to ensure that any space which the information had occupied will be available for reuse if and when a need for it arises.

supercat
4

Actually, you're conflating two different procedures: deletion and secure wiping. The first is just a standard, FS-dependent operation, a determined and standardized one. The second, secure wiping, has a lot of approaches, unerase-protection grades, standards, etc. Because of that, it is only logical that if you need the second one, you are responsible for selecting and activating a mechanism suitable to your own unique situation. There is no implementation of secure wiping that is common and standardized across every case, task, and environment.

Solomon Ucko
Alexey Vesnin
2

Why didn't OSes securely delete files right from the beginning?

You mean from the beginning of the deletion, or the beginning of operating systems? Early operating systems were trying to focus on technological advancements like storing stuff on hard drives rather than floppy drives or punch cards. There was no need for the operating system to do anything special to make it impossible to retrieve data from floppy drives. (Sorry, just having a bit of fun poking at the reliability of floppy drives there.) Back in those days, the ability to "save" was a big deal. File maintenance, like being able to "delete", might be a buried and rarely used menu option. To "securely delete" would seem like nonsense to people who could just throw the disk away, or burn it first. Why use the noisy floppy drive, which had much lower speeds than we are used to today, and wait for half a minute or longer just so it could overwrite ones with zeros? In the days when nobody understood what a "double click" was, because they didn't even have a mouse and so had never made even a single click, and when they thought that "copy and paste" was super advanced technology, getting people to even understand the purpose would have been quite challenging.

Then, when hard drives came along, the operating systems simply did the straightforward thing, which was to use the same computer code that had been proven reliable for floppy drives.

And it IS an important issue, or there would not have been articles and stories about it for about 3 decades now.

Actually, this may be more of a reflection of journalism's goal to write things that people may find interesting, or it may be the result of educational efforts. A lot of people simply don't understand that deleting is rather non-permanent, and it is good for people to understand things, so education is a positive thing. This doesn't mean that software's default behavior should be changed.

If some people feel that it is important and should be part of the OS, why is it not part of the OS?

Because some people feel like it is not important. Although wiping might be much more useful for some people (who are, by all means, free and welcome to wipe (instead of delete) if they wish to do so), different people may have different priorities. I, for one, delete stuff so that I have available disk space instead of used disk space, to speed up actions that affect all files (like copying all of the used disk space on a hard drive), so I don't need to see filenames of old files, and so that I don't accidentally open an old file when I want a new file. Deleting is an effective way to hide unwanted data (like old data) from software so that software doesn't use that unwanted data. Before I started to back up data more reliably, I actually appreciated the occasional ability to use recovery software that could "undelete" data. As others have expressed, wiping can cause additional slowdowns and additional "wear and tear". Since my computers haven't been in the presence of people who I am concerned may dig through my deleted data in order to try to learn something that I hoped to keep secret, wiping has never been a huge priority for me. With all that in mind, in response to your last statement, "Just do the right thing", I think that deleting has been the "right thing" for my usage scenarios far more frequently than wiping would be. I think the biggest simple reason, which answers many of your questions, is that identifying which behavior is the "right thing" for most scenarios might not be quite as straightforward as you're giving it credit for.

TOOGAM
  • Hard drives did indeed come after floppies for microcomputers, but not for mainframes and minicomputers: The hard drive came first (1956); the floppy later (~1970). – Wayne Conrad Jan 24 '16 at 13:38
  • I remember using 8 inch floppies on the PDP-11 in High School. Wish I had kept one. Got a few cards here somewhere... It seems interesting that because computers were mainly developed for the government and military initially, they did not include more security. I guess they had physical security (components too heavy to walk off with inside a locked room in a locked building on a secure campus surrounded with barbed wire and protected by armed guards...) so they didn't worry about it. But why with PCs and the issues of security over the last 3 decades was it not included? Oh, well, no point. –  Jan 26 '16 at 21:16
2

And why do they still not do this?

The other answers have answered why it wasn't done in the past (performance, user expectations etc.) so I will just add a note on why it still isn't done now. One problem is that secure deletion is only considered a partial security solution: it will delete one copy of the file, but most modern applications will leave substantial traces behind - history (undo) buffers, backup copies, etc. - from which data can be gathered. There is also the issue of files being accessed before being deleted (e.g. you were going to securely delete that file on your drive, but someone stole the drive before you had a chance). For these reasons, modern operating systems have instead implemented complete encryption for user storage, which makes secure delete unnecessary. With encrypted user data, even though the drive retains the image of a "deleted" file, all traces of that file (and every other file) are hidden to an attacker who does not have the decryption key. Chrome OS, Android, and iOS are all now encrypting user data by default, so, on these systems at least, there is little need to securely delete individual files.
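
To make that concrete, here is a sketch of the "crypto-erase" idea using per-file keys. It relies on the third-party `cryptography` package, and the in-memory `key_store` dict is a stand-in for whatever protected key storage a real system would use; real operating systems implement this at the filesystem or volume layer rather than in application code.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key_store = {}   # filename -> key; a real OS would keep this in protected storage

def store_encrypted(path, plaintext):
    key = Fernet.generate_key()
    key_store[path] = key
    with open(path, "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def crypto_erase(path):
    # "Secure delete" by destroying the key: constant time, no matter how big
    # the file is or how many stale copies wear levelling left on the flash.
    del key_store[path]

store_encrypted("note.bin", b"meet at dawn")
crypto_erase("note.bin")
# note.bin is still on disk, but without its key the contents are only ciphertext.
```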

bain
  • Per-file encryption keys (instead of full disk encryption), even if the keys are then stored in plaintext unless the user provides a key to encrypt them, seem like a better solution - you can then simply overwrite the ~256 bits that store the key and lose access to the file. OpenBSD does this for swap space, but normal file-system implementations are not especially common... – Gert van den Berg Jan 22 '16 at 14:52
  • 3
    @GertvandenBerg: Given the disparate write and erase block sizes of flash media, maintaining per-file file encryption keys and ensuring their reliable destruction is difficult. It would be a lot easier if flash chips offered and documented a means of zeroing out an already-written page, but many of them only allow writes to completely-blank pages because of their error-correcting-code logic, and make no exception for writes which would completely zero out a page. – supercat Jan 23 '16 at 19:46
  • 1
    Thank you for addressing the important half of my question. –  Jan 26 '16 at 21:12
  • @supercat: It gets tricky if there are too many layers of abstraction... If list of keys gets encrypted as well by a user-supplied key, effectively giving full-disk encryption, it becomes a lot harder to read the key data if not destroyed, but less convenient to use. Even without that, the chances of deleted data being recovered is still a lot less than currently, where no attempt to destroy the data is made. – Gert van den Berg Apr 04 '16 at 06:36
1

Other answers are good and have covered the question fully on a technical level. But I think another answer is warranted that takes a different view of the problem, as (IMHO) this question isn't really a technical one (asking why something wasn't made differently is more of a business decision than a technical one).

The question asks why the OS doesn't really delete the data. The maker of the OS can make it do whatever they like. There's certainly no reason why someone can't build an OS that truly deletes the data. But why would they? Try flipping the question around: why would someone build an OS that does this when it is easy to download a utility that does it for you? Also, the usual philosophy is that OSes should be minimalist and only contain features necessary to the operation of the computer.

I'm not trying to be harsh with my answer; what I'm saying is that the question seems to be asked from the wrong perspective and there are preliminary pieces of knowledge missing.

Celeritas
  • OSes minimalist? 25 years ago I wrote a set of DOS TSRs to share modems over a Novell network. Around that time, Microsoft was releasing its own network capabilities, and eventually it could share modems and printers on its network methodology. Microsoft has slowly taken over most of the 3rd party products that used to do things like back up, undelete, etc. It is anything but minimalist, and since Windows accounts for 90% of the PCs out there, which are probably 90% of the computers out there, this is really what I am referring to. –  Jan 26 '16 at 21:10
  • @nocomprende it's actually another problem with the question being asked: it speaks as if all OSes are the same, which they are not. Windows is one of the most bloated OSes. – Celeritas Jan 27 '16 at 04:01