
I'm looking for a file system with reasonable error correction that is nevertheless rugged against forensics after a wipe.

Say, an encrypted container mounted via loop as ext4, a journaling file system. This gives good performance and is secure in many ways.

Wiping in this scenario means destroying at least the start of the crypto-container (the ext4 journal). It is fast. Recovery then requires enormous effort: the key + the password + the destroyed prefix of the crypto-container.

Is there a special file system with encryption and a great wipe feature? By "great" I mean fast and unrecoverable.

E.g. the journal at the start of the file system would contain not the addresses of files, but a key: a map of blocks scattered randomly across the storage area. Destroying this key destroys the sequencing of the whole file system. A file in such a file system would lie in an unfragmented sequence of storage. Plus, the whole file system would be encrypted.

A wipe would then require overwriting only the journal (map) of the file system.

Say, overwrite the map and key 30 times, and no NSA, no NASA, no CIA could recover the wiped file system.
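A minimal, purely hypothetical Python sketch of that block-map idea (the KeyMapVolume class and its methods are invented for illustration and do not correspond to any existing file system): the physical order of blocks is derived from a small secret, so overwriting just that secret, together with the per-volume encryption the question already assumes, leaves only unordered ciphertext behind.

    import os
    import random

    BLOCK_SIZE = 4096

    class KeyMapVolume:
        """Toy volume whose physical block order is derived from a small secret."""

        def __init__(self, n_blocks):
            self.map_key = os.urandom(32)                 # the small secret ("journal")
            rng = random.Random(self.map_key)             # seeds the block permutation
            self.block_map = list(range(n_blocks))        # logical index -> physical index
            rng.shuffle(self.block_map)
            self.blocks = [bytes(BLOCK_SIZE)] * n_blocks  # would hold ciphertext in real life

        def write_block(self, logical, data):
            self.blocks[self.block_map[logical]] = data.ljust(BLOCK_SIZE, b"\0")

        def read_block(self, logical):
            return self.blocks[self.block_map[logical]]

        def wipe(self):
            # The "fast wipe": overwrite only the secret and the map.
            # The blocks themselves are untouched, but their ordering is gone,
            # and (per the question) they are encrypted anyway.
            self.map_key = os.urandom(32)
            self.block_map = None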

trankvilezator

4 Answers


The right way to "wipe out" data is to use encryption: never let unencrypted data ever hit the disk. If you do that, then destroying the decryption key is sufficient to destroy the data. The decryption key is small and in many cases you can keep it in RAM only (e.g. you type it upon boot, as a "password", which really means "a key that a human remembers"); if the key is in RAM and stays only in RAM, then destroying it is as simple as shutting down power (note, though, that RAM contents may resist loss of power for a few seconds or minutes).
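As a concrete illustration of that principle, here is a minimal Python sketch using the third-party cryptography package's AES-GCM (the library choice and file name are assumptions for the example, not something the answer specifies): the key lives only in a variable, i.e. in RAM, and only ciphertext ever reaches the disk, so forgetting the key is the wipe.

    # Sketch: plaintext never touches disk; only ciphertext does.
    # Forgetting the key (e.g. by powering off) is the "wipe".
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # kept in RAM only
    aead = AESGCM(key)

    def write_encrypted(path, plaintext):
        nonce = os.urandom(12)                   # 96-bit nonce, stored with the data
        with open(path, "wb") as f:
            f.write(nonce + aead.encrypt(nonce, plaintext, None))

    def read_encrypted(path):
        with open(path, "rb") as f:
            blob = f.read()
        return aead.decrypt(blob[:12], blob[12:], None)

    write_encrypted("secret.bin", b"never hits the disk in the clear")
    print(read_encrypted("secret.bin"))
    # Once `key` is gone (process exit, reboot), secret.bin is just noise.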

File wiping is what you do when you did not do things the right way. You did write sensitive information on a physical medium, without any encryption. And you would like to "fix" that. There are three main problems with that:

  • File deletion does not overwrite the data on the physical medium; it just marks it as reusable for other files.
  • Sometimes, even writing over the data may not actually destroy it. This is where the so-called "file shredders" intervene, by overwriting the data several times with special patterns which ought to destroy all traces of the past data. However, such shredders are quite specific to the actual disk technology, and, in practice, quite specific to the technology used by disks 12 years ago; on modern disks, the shredders are likely to be useless and unnecessary. Conversely, with SSD, quite a lot of data can remain out of reach of the most thorough shredders. So file shredders are either total overkill or insufficient, with no middle ground.
  • Copies of parts of the file data can be stored in other areas. This may happen with virtual memory because the file data, before being in files, was in RAM. This may also happen with journaling filesystems (depending on the filesystem implementation and configuration).

So any solution based on file "wiping" is likely to be inefficient and incomplete. Anyway, despite this incompleteness, some people have tried to do some sort of automated shredding. This can be done at various levels; one tool does it by patching the unlink() C library function call (through an LD_PRELOAD trick). Statically linked processes would still avoid it, though (but there are very few statically linked applications in a typical Linux installation). I would not recommend it (and the author himself is wary of it); notably, file-shredding on an SSD is not only ineffective (see above), but also noticeably shortens the lifetime of said SSD.
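For concreteness, a minimal Python sketch of what such an automated shredder boils down to (overwrite in place, sync, then unlink); the tool mentioned above instead hooks unlink() in C via LD_PRELOAD, and all the caveats above about SSDs and journal/swap copies still apply.

    # Sketch of what a file "shredder" does: overwrite in place, then unlink.
    # Per the caveats above, this is largely pointless on SSDs and can miss
    # copies kept by the filesystem journal or swap.
    import os

    def shred(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b", buffering=0) as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())     # push the overwrite to the device
        os.remove(path)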

Thomas Pornin
  • thanks, but file encryption is sometimes a little awkward on standalone servers that aren't reachable through local consoles; unless you know of a method for the following scenario: you have a virtual master that keeps the HD keys in memory to start virtual machines. Is there a solution to this? Is this worth a question? – that guy from over there Sep 20 '13 at 18:15

There isn't a [main line] file system available that performs the way you describe/desire, mainly because it isn't very efficient as a file system. Doesn't mean that it is impossible, but that it just isn't often done and hasn't reached a common use scenario. In fact, what you describe is more like an encrypted file system than a quickly wiped one; more on that in a second.

While it isn't as "fast" as I think you're describing, DBAN (Darik's Boot and Nuke, dban.org) provides an extremely thorough wipe of data from physical magnetic media. I know this from experience performing data recoveries, including with low-level magnetic scanners. Once enough random data has overwritten your target data, the physical magnetic medium is useless for recovery purposes.

For "fast" methods you have to utilize either full disk encryption (think TrueCrypt or Microsoft's BitLocker ) or physical disk destruction. The encryption "jumbles" the data so it is unusable without the private key. The destruction eliminates the physical media so there is no recoverable data (normally through degaussing the magnetic surfaces and then physically altering them so they are permanently unusable) Garner makes some good destroyers that will first degauss before drilling the platters. No one is going to be getting data off a drive after that.

Ruscal

With a system like Rubberhose it would be extremely easy to destroy a file system. Just delete the master key for a Rubberhose partition and it can no longer be decoded or even detected.

WAR10CK

Are you looking for a way to wipe an entire filesystem, or individual files? Wiping an entire filesystem securely is easy. Use something like LUKS, which uses "anti-forensic stripes", a technique that stretches the stored key material to about 256 KiB, making it far more likely that an attempt to destroy the key succeeds: destroying just a few bytes is enough to make the key unrecoverable, and no single reallocated sector can leave a usable copy of the key behind. Destroying the entire filesystem then becomes as simple as running cryptsetup luksErase (also available as cryptsetup erase) against the underlying LUKS device, which instantly and securely destroys everything on that volume.
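As a rough illustration of the anti-forensic-stripe property, here is a simplified Python sketch using a plain XOR split (the real LUKS AF splitter additionally diffuses the stripes with a hash, but the all-or-nothing effect is the same): every stripe is needed to reconstruct the key, so damaging any small piece of the stored material destroys it.

    import os
    from functools import reduce

    def af_split(key, stripes):
        # Expand the key into `stripes` blocks; all of them XOR back to the key.
        parts = [os.urandom(len(key)) for _ in range(stripes - 1)]
        last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts, key)
        return parts + [last]

    def af_merge(parts):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts)

    key = os.urandom(32)
    stored = af_split(key, stripes=4000)     # ~125 KiB of stored material for a 32-byte key
    assert af_merge(stored) == key
    stored[1234] = os.urandom(32)            # corrupt a single stripe...
    assert af_merge(stored) != key           # ...and the key is gone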

If you're looking for the ability to do that on a per-file basis so that deletion causes instant and secure destruction of files and metadata, I don't think you'll have much luck. I don't know of any filesystem which is capable of doing that. Filesystems that support encryption can be fully wiped, but individual files are still encrypted with their master key. Some filesystems, however, support the 's' file attribute (set by chattr), which tells the filesystem driver to overwrite all blocks in that file with zeros when it's unlinked. The popular ext4 and btrfs filesystems do not support it, though. It also does not delete metadata, so some information can still leak in the form of the file name, size, inode number, and other metadata attributes.

An ideal filesystem would store a random encryption key in every file inode, and encrypt each block with that key. Upon deletion, the inode could be securely erased, along with the key, rendering all blocks it points to useless. This would provide a simple way for filesystems like ext4 to support the 's' attribute, at the very least.
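A toy Python sketch of that per-file-key idea (the PerFileKeyStore class stands in for the inode table and is invented for illustration; it uses the third-party cryptography package): each file is encrypted under its own random key, and "secure delete" is just erasing that key.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class PerFileKeyStore:
        def __init__(self):
            self.keys = {}                        # path -> per-file key (the "inode" field)

        def write(self, path, plaintext):
            key = self.keys.setdefault(path, AESGCM.generate_key(bit_length=256))
            nonce = os.urandom(12)
            with open(path, "wb") as f:
                f.write(nonce + AESGCM(key).encrypt(nonce, plaintext, None))

        def read(self, path):
            with open(path, "rb") as f:
                blob = f.read()
            return AESGCM(self.keys[path]).decrypt(blob[:12], blob[12:], None)

        def secure_delete(self, path):
            del self.keys[path]                   # destroying the key is the erase
            os.remove(path)                       # the ciphertext left behind is noise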

guest