
We all know that files deleted from a modern system usually don't get erased immediately from the storage medium, which is a security risk in itself.

However, this question focuses on a different topic: suppose a plaintext file on the system stores sensitive information (e.g. crypto keys) for a short while. Then the contents of the file are modified and the sensitive information is removed from it.

Assuming no program read the file while it contained that information, and assuming the text editor made no backups of the file while it was being changed (since it's simply the "nano" unix utility), are there any ways the sensitive information can be extracted from the storage medium after the file has been modified to remove the keys? How would this information be extracted, and where is it stored?

Assume the partition where the file is held is, as I said, unencrypted, and the storage medium is an SSD. As a threat model, assume the attacker gets physical access to that SSD after the file was modified, but the computer itself is turned off (so no extraction from RAM, caches, or anything like that).

If it's relevant, also assume the partition is ext4 (although it would be interesting to hear the answer for other filesystems as well).

John
  • How is the file modified if "no program read the file"? – Tom K. Feb 19 '18 at 13:45
  • Sorry, meant to say no program read the file before it was modified to not contain the keys. And even then, the only program to have read it is the editor itself – John Feb 19 '18 at 13:50

2 Answers


Not an expert, but I think that unless TRIM is turned on and given enough time to do its work:

  • The average user cannot recover them.

  • An expert user has a chance of recovering them.

  • A "pro" with the capacity to desolder the memory chips and read them directly can very likely read them, unless significant effort is made to ensure the sectors containing the data are physically erased.

  • Could you provide any sources for your claims? – Tom K. Feb 19 '18 at 13:52
  • This is just what I think, no claim, let me explain: An SSD controller does not erase sectors immediately. It probably waits for a physical sector to fall below a certain number of used pages before copying/recycling those used pages to a new sector and queuing the now unused sector to be erased. Then the queue takes some extra time to erase it. Most users (including me) will not know how to read pages marked for deletion, but with the right tools and knowledge I'm sure you can. Failing this, if you can read the chip sector by sector you will surely find the information if it's there. – Toni Homedes i Saun Feb 19 '18 at 13:57

On ext4 there is a chance that your data (your secret keys) can be recovered irrespective of the underlying hardware, depending on how you mount it. (The following explanations are valid for many other journaling filesystems, too, and you always have a problem with copy-on-write filesystems such as btrfs and others that are snapshot-capable).

The curse of journaling/copy-on-write file systems

This has to do with how modern (journaling) filesystems make sure that if you have a power loss in the middle of writing to the filesystem, the filesystem will still come up in a sane state. With small disks, we had to run programs such as fsck to fix a filesystem after such a crash. With large disks, running fsck would take too long, so journaling filesystems are a much better option.

However, they achieve their recovery magic by keeping a journal of operations (hence the name) they can either replay in full or discard if there is no commit on record (e.g. if your computer died in the middle of writing something to disk).

Now depending on how much your filesystem writes into the journal (just the file metadata operations or also the associated data), you might have a problem.
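To give you an idea of how accessible the journal is: on ext3/ext4, anyone with access to the block device can dump it with debugfs from e2fsprogs (a sketch; the device name is just an example):

# Dump the journal's descriptor blocks; with full data journaling,
# recently overwritten file contents can show up here
debugfs -R "logdump -a" /dev/sda1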

One way to recover to a clean state after writing data to a file went wrong or got interrupted in the middle is to never write data to the same location: if you overwrite a file's data, the new contents are not stored in the same disk block as the original file contents; a new disk block is allocated instead. This is called copy-on-write (COW for short). This way, the old file contents can easily be recovered when it turns out the write operation didn't finish correctly.

However, this also means that your secret keys still reside on disk after being "overwritten" with new data and can be found with a simple grep over the partition device (-a makes grep treat the binary device as text), for example:

grep -a "my-secret" /dev/sda1

or maybe

strings /dev/sda1 | grep "my-secret"

So you don't need to be a rocket scientist to recover previously overwritten data when the filesystem is mounted with data journaling enabled or uses copy-on-write.

Data journaling on ext3 and ext4

ext3 and ext4 allow you to turn data journaling on (the data=journal option) or off (the default, data=ordered, journals only metadata). I think that without data journaling (or at least with data=writeback), your file contents should get overwritten in place (that's what man ext4 suggests, anyway). But I'm not 100% sure, so you'd better test it.
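If you want to check what your system is actually doing, here's a quick sketch (device and mount point are just examples):

# Show the active mount options; if no data= option appears,
# the ext4 default (data=ordered, metadata-only journaling) is in effect
grep /dev/sda1 /proc/mounts

# Explicitly mount with full data journaling
mount -t ext4 -o data=journal /dev/sda1 /mnt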

Problems caused by solid-state disks and wear-leveling algorithms

SSDs and flash drives pose more problems: wear-leveling algorithms make it pretty much impossible to know where data physically ends up.

An SSD's TRIM option may help you, because it lets the operating system tell the SSD controller which blocks are no longer in use, so the controller can schedule them to be erased earlier. Still, you have no guarantees here, either: cheap SSDs might not even implement TRIM properly and lie about it to the operating system.
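To see whether your setup supports TRIM at all, and to trigger it by hand, something like this should work (device and mount point are just examples):

# Non-zero DISC-GRAN/DISC-MAX values mean the device advertises TRIM
lsblk --discard /dev/sda

# Tell the SSD which blocks of the mounted filesystem are unused
fstrim -v /mnt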

Finally, even rotational magnetic disks pose problems. Most disks still in operation have smart controllers which can transparently substitute a spare sector for a defective one on the fly. This can happen at any time, so if you're very unlucky, the sector that contains your secret might get marked as defective and swapped out. Usually you can't access these defective sectors (they're defective, after all), but that doesn't mean experts with the necessary diagnostic hardware can't still read them. However, I wouldn't worry much about this scenario; it's just something to keep in mind.
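You can at least see how many sectors a drive has already remapped by looking at its SMART attributes (smartctl comes with smartmontools; the device name is an example):

# Reallocated_Sector_Ct counts sectors the drive has silently swapped out
smartctl -A /dev/sda | grep -i reallocated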

What can you do?

So your only safe bet is to encrypt the filesystem where you store sensitive information. Then you can make sure it's really inaccessible by deleting the encryption key. This is the only solution that works no matter what kind of filesystem and hardware you use.
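As a rough sketch of that approach with LUKS (careful: luksFormat destroys any existing data on the partition, and the device and mapping names are just examples):

# Set up an encrypted container and create a filesystem inside it
cryptsetup luksFormat /dev/sda1
cryptsetup open /dev/sda1 secrets
mkfs.ext4 /dev/mapper/secrets

# Later: make everything in the container unrecoverable
# by destroying all key slots
cryptsetup luksErase /dev/sda1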

Another way to actually overwrite data is to create a file that takes up all the free space of a partition, then delete it:

dd if=/dev/zero of=stupidly_big_file bs=1M
sync && rm stupidly_big_file

However, this may take a very long time, and it isn't quite foolproof: on an SSD, the over-provisioned spare area is never touched this way (see the comments below).

Out of Band
  • A few notes. First, it may be interesting to know that, even without `data=journal`, small files may still have their _data_ kept in the journal due to the `inline_data` feature keeping small files in the inode itself. Second, you can use `hdparm` to read defective sectors, no hardware needed. Usually defective means they are failing (a threshold or timeout passed for read/write attempts), not totally unreadable. Lastly, fully overwriting an SSD does not work, as overprovisioning space guarantees that a sizable chunk of data will not be overwritten. 100% full on a SSD is not really 100% full. – forest Feb 20 '18 at 03:08
  • Good points. I didn't know about inline_data in ext and forgot about hdparm. Though I guess that only helps with sectors that are still readable by the HD hardware; I know there are shops that recover data on truly damaged HDs, and these do need special hardware. In fact I think they open up the disks and take the platters apart... then again, I don't think that's what we need to worry about :-) – Out of Band Feb 20 '18 at 13:25