
I would prefer that no one, not even me, could encrypt my files. I have no use for it, and I don't want it.

Is there a way to permanently disable any sort of encryption at the OS level?

If not, is this a possible improvement that a future file system could incorporate? Or is it fundamentally impossible to prevent?

Thomas Roy
  • Why would you prefer this? – JonH May 14 '17 at 20:59
  • Is this really *"How do I prevent ransomware?"* – Xen2050 May 14 '17 at 22:28
  • Even if such a thing were possible, which it isn't, it wouldn't protect your files against ransomware. The ransomware hackers would just irrecoverably delete your files instead, and then _say_ that they were encrypted and could be recovered if you paid the ransom. Some desperate people would pay; they just wouldn't actually get their files back. – Mike Scott May 15 '17 at 11:44
  • Seems like a case of an XY problem (https://meta.stackexchange.com/a/66378). Be that as it may: blocking encryption won't stop ransomware in the first place, and it's not really possible to block all encryption *anyway* on a general-purpose computing device. – Soron May 15 '17 at 12:28
  • You might not have any personal use for it, but your OS and many programs you run do. – schroeder May 15 '17 at 13:55
  • It'd have to be a filesystem that doesn't support content. – user2357112 May 15 '17 at 17:30
  • You are effectively asking if you can buy a diary that prevents you from writing entries in French. A file system, just like a diary, holds precisely what you write in it. – Tavian Barnes May 15 '17 at 18:21
  • I suppose if the goal is rendering ransomware ineffective by your choice of filesystem, you want one that doesn't support _deletion_ (nor overwriting); it doesn't matter whether it supports encryption. – David Z May 16 '17 at 01:20
  • Thanks everyone. For some reason I was thinking of encryption as a mechanism of the partition/FS/OS/drivers. I now realize that ransomware simply encrypts the bytes of the file and writes them to disk. This is simpler than what I was imagining. – Thomas Roy May 16 '17 at 04:07
  • Use btrfs + snapper. – anna328p May 16 '17 at 20:57
  • btrfs read-only snapshots are one approach to de-fang ransomware. Once the snapshot is created, only the system root user can delete it or make it writable to encrypt its content. The overhead of frequent snapshots is quite small. ZFS has similar facilities. – nigel222 May 17 '17 at 13:38
  • Log-structured filesystems are another approach. The basic idea is that once data is written, it is not overwritable within a defined time window, so you can wind the filesystem back to how it was at any point in time in the window, not just to a snapshot. A separate process reclaims the space occupied by data older than the window. The biggest problem with such filesystems is that repeatedly overwriting the same file will fill the filesystem, and recovery from this situation can be a serious problem. But perhaps as disks get ever larger, this idea will finally be one whose time has come? – nigel222 May 17 '17 at 13:47
  • @nigel222 As disks get larger, people will think that having their movies in Ultra-Ultra-Ultra-Ultra HD is an even better idea :P And if you allow any way to purge the logs, that's what the ransomware would use... – Luaan May 18 '17 at 09:04

9 Answers

103

No, that's impossible, unless you change the definition of a file.

A file is arbitrary data. Arbitrary data can be encrypted data.

Even if we only allow structured data, structured data can, if we assume no space constraints, be abused to store all arbitrary data* (citation needed), which brings us back to the starting point.


You can have partial success if you introduce restrictions. For example, if you don't want files to be encrypted after they have been written, you can use a write-once (or even write-only) system. Or, if you want to fight ransomware attacks, you could have a filesystem that preserves the original copies of modified files for a certain amount of time.


*For example, a restrictive text format that only allows the words "Fizz" and "Buzz" can represent any binary data by replacing each 0 bit with "Fizz" and each 1 bit with "Buzz".
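The footnote's trick can be made concrete in a few lines. This is an illustrative sketch (the function names and encoding details are my own, not part of the answer): each byte is spelled out bit by bit as "Fizz" for 0 and "Buzz" for 1, so even a format restricted to those two words carries arbitrary data, including ciphertext.

```python
def encode(data: bytes) -> str:
    """Spell out every bit of `data` as the word "Fizz" (0) or "Buzz" (1)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return " ".join("Fizz" if b == "0" else "Buzz" for b in bits)

def decode(text: str) -> bytes:
    """Recover the original bytes from a Fizz/Buzz word stream."""
    bits = "".join("0" if word == "Fizz" else "1" for word in text.split())
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

secret = b"\x8f\x01\xfe"          # could just as well be ciphertext
assert decode(encode(secret)) == secret
```

The encoding is wildly inefficient (40 characters per byte), but efficiency was never the point: any channel that admits two distinguishable symbols admits arbitrary data.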

Peter
  • Sorry folks, this got silly. – schroeder May 16 '17 at 06:31
  • How about a filesystem that understands many plausible format extensions and validates saved data against the format, refusing to write the file if it doesn't check out? E.g., if you open a `.jpeg` in a hex editor and change a few bytes _(in an attempt to create kewl glitch art)_, when you save, the OS gives you an error: **"file could not be saved; data could not be validated against the JPEG/JFIF spec"**. Of course, this type of filesystem would put a huge burden on app devs to only use supported formats and to apply for their formats to be added to the OS… but theoretically it could work. – Slipp D. Thompson May 17 '17 at 02:10
  • @SlippD.Thompson Not only would you need such file-type-specific validation, but you would also have to validate that each .jpeg is an actual picture, rather than a bunch of pixels that can be interpreted as encrypted data. You also have to ensure the images do not contain any steganographic content. – Cort Ammon May 17 '17 at 04:02
  • @CortAmmon Sure, if you want to have a fully anti-encryption / obscured-data filesystem. I was shooting more for an anti-encrypt-existing-content-in-place filesystem. – Slipp D. Thompson May 17 '17 at 06:11
  • @Peter I know this is not a helpful comment, but your statement is genius. I also believe structured data with arbitrary length could be used to store every file. – EralpB May 17 '17 at 07:01
  • @Slipp That could offer some protection against random changes. Ransomware attacks are deliberate, targeted changes. Knowledge of such verification makes it easy, if not trivial, to bypass. Keep in mind the number of file types successful ransomware needs to affect is small (I'd assume docx and jp(e)g alone are enough to get 95% of private "customers"). – Peter May 17 '17 at 07:41
  • @Peter Well yeah, every improvement in safety & security will eventually be broken; there is no such thing as a perfect system. The safety of file format verification would be against attacks that rely on _a file being arbitrary data_, i.e. attacks without knowledge of this verification process. And of course not all data can be verified, so if an attack was designed to, say, cipher all the human-readable text in a `.doc`, that would still work. – Slipp D. Thompson May 17 '17 at 13:29
  • @SlippD.Thompson So I want to encrypt your jpeg. I read it, run it through an encryption algorithm, and convert the output to octal or hex. I then save it as a txt file. It's perfectly valid for a txt file to contain the characters "01234567". My code then deletes your original file and puts up the ransom demand. File format verification does exactly nothing. – Murphy May 17 '17 at 16:21
  • @Murphy That example includes deleting the original file, so all bets are off. A magical encryption-proof filesystem serves no purpose if outright deleting files is your strategy. It should be noted that filesystem data recovery is much easier on outright-deleted files _(modifying files can be atomic or non-atomic operations, but copying content into new files is always atomic on most filesystems)_, and retrieving backups of deleted files is much easier than of files that have changed, since given time, the changed files can replace the legitimate backed-up version(s). – Slipp D. Thompson May 17 '17 at 16:28
  • If you make the file system suddenly understand the semantics of file types and perform validation, now you have more problems: 1. Strict validation might prevent writing partial files. 2. You have significantly increased your attack surface: there is now potential for a malformed file to target bugs in the file system's format-parsing code. – jamesdlin May 18 '17 at 04:58
  • @SlippD.Thompson Existing ransomware routinely makes an encrypted copy and then deletes the old file in an unrecoverable way. If files can't be corrupted (say, by making pictures all black, text files all zeros, etc.), then after you delete the old files, simply create a single large text file, as large as the unallocated space in the file system, that's a string of random text characters. No more file recovery for you unless you have a separate backup system. If you do in fact have a separate backup system, then you don't need your massively complex all-knowing filesystem. – Murphy May 18 '17 at 10:19
94

Read-only file systems can, by definition, not be written to (at least not digitally; what you do with a hole puncher and a neodymium magnet is your own business). Examples:

  • Live CDs, from which you can boot into an operating system which will look the same on every boot.
  • WORM (Write Once Read Many) devices, used for example by financial institutions which have to record transactions for many years with no means of altering or deleting them digitally.
  • Writable partitions mounted as read-only. This can of course be circumvented by a program with root access.

Versioning file systems would be more practical, but are not common. Such systems might easily include options to transparently write each version of a file (or its difference from the previous version) to a WORM device or otherwise protected storage.

Both of these solve the underlying issue: Not losing the original data in case of encryption by malicious software.
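The versioning idea can be sketched in a few lines of Python (purely illustrative; the `save`/`load` names are made up, and a real versioning filesystem enforces this in the kernel rather than in application code). Every write appends a new numbered version, so an "encrypting" overwrite merely adds a version while the original stays readable:

```python
import os
import tempfile

def save(directory: str, name: str, data: bytes) -> int:
    """Append a new version of `name`; never overwrite an old one."""
    version = 1 + sum(1 for f in os.listdir(directory)
                      if f.startswith(name + ".v"))
    with open(os.path.join(directory, f"{name}.v{version}"), "wb") as fh:
        fh.write(data)
    return version

def load(directory: str, name: str, version: int) -> bytes:
    """Read any retained version, including pre-attack ones."""
    with open(os.path.join(directory, f"{name}.v{version}"), "rb") as fh:
        return fh.read()

vault = tempfile.mkdtemp()
save(vault, "doc", b"original contents")
save(vault, "doc", b"\x9f\x42\x11")   # malware's ciphertext: just a new version
assert load(vault, "doc", 1) == b"original contents"
```

The catch, as with any versioning scheme, is that whoever can delete old versions defeats the protection, which is why write-once storage for the version history matters.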

l0b0
  • It would be more accurate to say that versioning file systems are not *currently* common. VMS had such a file system, and it was comparatively common before PCs became ubiquitous. – John Bollinger May 15 '17 at 21:45
  • Good answer. This is the closest to a practically usable answer. I would love to have a physical lock on my data drive that prevents any writing. – Thomas Roy May 16 '17 at 04:12
  • You might be able to find a USB stick with a hardware write-protection switch: http://www.fencepost.net/2010/03/usb-flash-drives-with-hardware-write-protection/ – daniel May 16 '17 at 08:40
  • What this answer lacks is that "artificial" read-only devices can be mounted or physically altered (via a switch) to enable writes. CDs and DVDs become impractical if you wish to continuously write changes. Versioning file systems cannot guarantee that a previous version will be available and "untouched". – dark_st3alth May 16 '17 at 12:30
  • Even a read-only medium like a CD-ROM isn't immune from an attack against a naive victim: the attacker could use a kernel module that makes it _appear_ that a particular disk is encrypted or otherwise altered, enticing the victim to pay money to "decrypt" the files, even though the files were not altered at all and would be easily recovered on a different, uninfected computer. – Johnny May 16 '17 at 22:05
  • @Johnny A simple restart would blow away any such attack and revert the system back to whatever it was, as far as live CDs go. Unless you're using the term "naive" generously :) – RandomUs1r May 17 '17 at 15:29
  • "I would love to have a physical lock on my data drive that prevents any writing." Oh, those already exist. The power button. – Shane May 17 '17 at 21:35
  • @Shane For a system drive, the two would be almost equivalent, yes. But Thomas was specifically talking about his data drive :) – Luaan May 18 '17 at 09:07
  • @ThomasRoy Well, there are some ways of achieving similar effects. For example, Windows Embedded has an option that allows writes but doesn't actually write them to the disk; when you restart the computer, you're back where you started. If you run your computer virtualised, you can mount your user system on drives that use snapshots or logs, or prevent writing altogether... there are plenty of options. Of course, it still doesn't work all that well for games, but otherwise it's not that hard to set up and maintain. – Luaan May 18 '17 at 09:09
  • NTFS (and btrfs, and HFS+) have some versioning support, but they do not protect against programs that have administrator privileges. – jpa May 18 '17 at 10:07
27

Loads of file systems have no native file-system-level encryption support. Software-encrypted files can be stored on any file system, though, just like any other file: the file system cannot tell the difference between random data and encrypted data.
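To illustrate that point with a toy XOR keystream (deliberately NOT real cryptography; the function name is made up), "encryption" is just a program turning bytes into other bytes, and nothing in the output tells the filesystem anything:

```python
import itertools

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; applying it twice decrypts."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

ciphertext = toy_encrypt(b"quarterly report", b"key")
# To any filesystem this is ordinary file content, e.g.:
#   open("report.locked", "wb").write(ciphertext)
assert toy_encrypt(ciphertext, b"key") == b"quarterly report"
```

A real ransomware binary uses a proper cipher, but the filesystem's view is identical: a plain write of opaque bytes.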

> Is there a way to permanently disable any sort of encryption at the OS level?

Not so long as code can run and write files to disk.

> Or is it fundamentally impossible to prevent?

Without sacrificing basic functionality, yes.


You've tagged your question ransomware though. What you may be looking for is information on application sandboxing or heuristic-based ransomware detection.

Alexander O'Mara
12

There seems to be a misconception about the relationship between encryption and file systems.

The two are independent: one can do encryption without having a file system, and one can have a file system without doing encryption.

For instance, traditional FAT16/FAT32 file systems do not "support" native encryption the way NTFS does with its EFS subcomponent. That doesn't mean, however, that one couldn't change data that is already committed, or write data to the file system that is already encrypted.

It is entirely possible to have a "write-only" file system or a "read-only" drive, but this still doesn't prevent someone from copying the data, encrypting it, and keeping it elsewhere. You can certainly prevent the deletion and overwriting of data already on disk (files in Windows can be locked by an active process).

dark_st3alth
5

A file system is basically a service provider for the operating system (i.e. it provides a way to permanently store and retrieve data on underlying storage media), and the operating system offers this service to any program running on the computer.

It's not in a file system's basic job description to take care of encryption, and while there are some file systems that offer native encryption, it's usually other layers that take care of it.

Ransomware doesn't care about any of that, though. Even if you have a file system that doesn't do native encryption, and even if you've removed any additional software that provides an encryption layer (such as TrueCrypt or VeraCrypt), the ransomware code itself will use the interface the OS provides to access the files on the filesystem and encrypt them. There's nothing that will reliably protect you from that except diligent backing up of your data, so that you can recover if necessary.

Out of Band
  • Caveat: especially nasty pieces of ransomware will aim to encrypt your backup, too. – Peter May 14 '17 at 19:27
  • Yes; you'll need to implement a generational backup scheme, and not keep all your generations on the same backup medium. Also, it would pay to run some easy statistical checks when backing up your data; if it looks completely random (meaning it's encrypted), you should immediately stop backing up and instead restore your system to a known safe state. – Out of Band May 14 '17 at 19:50
  • @Pascal Your checks may have to specifically look for the headers of large compressed files, as these will have a large proportion of seemingly random data. But of course ransomware may well write headers to the encrypted files. And randomness is hard to measure at the best of times. – Chris H May 15 '17 at 12:29
  • Randomness is fairly easy to measure: send data through a data compression library, and if it doesn't compress at all, that's a good sign of randomness. Another heuristic would be an even distribution of all byte values. Yet another that could cause an alert would be a decline in files identified as text files compared to an older histogram. As for compressed files: you could simply decompress them and check the contents, just like virus scanners do. But you're right that if you were dealing with data that mostly consisted of zipped data, you'd need to be careful. – Out of Band May 15 '17 at 18:40
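The compression heuristic described in the comments above can be sketched as follows (the function name and the 0.95 threshold are my own illustrative choices, not from the discussion):

```python
import os
import zlib

def looks_encrypted(data: bytes, threshold: float = 0.95) -> bool:
    """Heuristic: data that barely compresses may be encrypted (or simply
    already compressed; that is the false-positive case the comments note)."""
    if len(data) < 256:
        return False  # too small to judge reliably
    ratio = len(zlib.compress(data)) / len(data)
    return ratio > threshold

assert looks_encrypted(os.urandom(4096))               # random ~ encrypted
assert not looks_encrypted(b"All work and no play. " * 200)
```

A backup tool could run such a check on changed files and refuse to rotate out old generations when too many suddenly "look encrypted".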
2

Ideally, encrypted data should be indistinguishable from random data. This isn't a matter of "hiding" whether or not encrypted data is present (that's steganography; more on this later) but a matter of ensuring that an attacker can't find patterns in the data which could, in turn, be used to figure out the key to decrypt it.

This causes problems for your desired system, because random data could, in theory, contain any sequence of bits. It's not necessarily likely that a random string of bits would turn out to be, say, a valid JPEG, but it has happened. Combine this with steganography, where data is hidden inside other data (often used to hide encrypted data inside unencrypted data), and the situation looks even more grim for your scenario.
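As a concrete illustration of the steganography point, here is a minimal least-significant-bit scheme (a sketch with made-up helper names, not a robust tool): each bit of the hidden message replaces the lowest bit of one cover byte, so pixel data, for example, changes imperceptibly while carrying the secret.

```python
def hide(cover: bytes, bits: str) -> bytes:
    """Overwrite the low bit of each cover byte with one message bit."""
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)
    return bytes(out)

def reveal(stego: bytes, n_bits: int) -> str:
    """Read the low bit of the first n_bits cover bytes back out."""
    return "".join(str(b & 1) for b in stego[:n_bits])

pixels = bytes(range(32))            # stand-in for image pixel data
assert reveal(hide(pixels, "101100"), 6) == "101100"
```

A filesystem inspecting the result sees a perfectly valid-looking image; the hidden (possibly encrypted) payload is invisible without knowing the scheme.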

Because of this, there isn't really a way to tell whether a given piece of data is encrypted or not, or contains another piece of data that might be encrypted. The closest you could get to a filesystem that absolutely cannot contain encrypted data is one that cannot contain any data at all, and there are very few uses for a filesystem like that.

The Spooniest
  • _It's not [...] likely that a random string of bits would turn out to be [...] a random JPEG, [but it has happened](http://riii.me/5knk0)._ For a **very** broad definition of "random", yes. The actual process was a bit more complicated and was 'pushed' towards a valid JPEG file. There's still a very, very, _very_ slim chance for any random sequence of bits to be a valid JPEG (where it depends on what you define a 'very, very, _very_ slim chance' to be), but pulling random bits from thin air without this extra 'help' wouldn't have resulted in JPEGs _that_ quickly. – RobIII May 15 '17 at 15:19
  • +1 One thing to add is that compressed files also happen to look like random data, just like encrypted files do (especially if the reader doesn't know the compression algorithm). – Peter May 17 '17 at 13:55
1

Like others have said, you can't prevent encryption at the filesystem level, but the closest alternative that I haven't already seen mentioned is Mandatory Access Control.

Basically, you can set extra permissions up so your applications that have access to the Internet have extremely limited access to your disk. Your web browser, for example, could be set up to only be able to write to its settings, cache, and a downloads directory. Any attempt by that process to write outside those folders would be denied by the operating system.

So if a vulnerability is exploited, instead of your entire disk getting encrypted, only your downloads directory can be. Of course this is not 100% safe (nothing is), but it's another layer that attackers don't usually expect.
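The enforcement itself lives in the kernel (e.g. AppArmor or SELinux on Linux), but the policy logic amounts to a per-application path allowlist. A sketch of that logic, with hypothetical paths:

```python
import os

ALLOWED_WRITE_DIRS = [            # hypothetical per-app policy
    "/home/user/Downloads",
    "/home/user/.cache/browser",
]

def write_permitted(path: str) -> bool:
    """Deny any write that does not land inside an allowed directory."""
    target = os.path.abspath(path)
    return any(target == d or target.startswith(d + os.sep)
               for d in ALLOWED_WRITE_DIRS)

assert write_permitted("/home/user/Downloads/setup.exe")
assert not write_permitted("/home/user/Documents/thesis.docx")
# Path traversal is normalised away before the check:
assert not write_permitted("/home/user/Downloads/../Documents/x")
```

Note the `abspath` normalisation: without it, a `..` component would let a confined process escape the allowlist.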

The downside is that it's tricky to set up and inconvenient to maintain, and it feels pointless after a while, once the high-profile attacks leave the public eye.

Karl Bielefeldt
0

I am guessing your reasoning stems from concern about being vulnerable to ransomware. If that isn't the case, then my answer also becomes "why?". All the file systems that support encryption give you the option not to use it. Thus you don't have the overhead slowing down your file accesses, and you don't have to worry about disk corruption making your entire file system unreadable.

I don't think any of the ransomware varieties leverage the file system's encryption capabilities. They do their own, so disabling encryption at the file system level won't help.

To prevent ransomware from being able to encrypt your files, you'd have to make your file system read-only, or fiddle with the permissions. You could take away your own ability to write/modify/delete your data files, and specifically grant yourself those rights only when you want to update a file. Not very feasible. I did a little proof of concept recently with a lockbox type of app that would remove all change permissions on a folder, making it read-only to you, administrators, etc. Then, when you want to be able to change the files, you unlock the folder.
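On a POSIX system, the folder-locking part of such a proof of concept could look like the following sketch (the `lock`/`unlock` names are made up, and the original proof of concept used Windows ACLs rather than POSIX permission bits):

```python
import os
import stat
import tempfile

def lock(folder: str) -> None:
    """Drop write permission so files inside can't be added or removed."""
    os.chmod(folder, stat.S_IRUSR | stat.S_IXUSR)    # r-x for owner

def unlock(folder: str) -> None:
    """Restore owner read/write/execute for deliberate edits."""
    os.chmod(folder, stat.S_IRWXU)                   # rwx for owner

vault = tempfile.mkdtemp()
lock(vault)
assert not (os.stat(vault).st_mode & stat.S_IWUSR)   # writes now denied
unlock(vault)
assert os.stat(vault).st_mode & stat.S_IWUSR
```

The obvious limitation, as the answer notes, is usability, plus the fact that malware running with the same privileges can simply call `unlock` itself.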

I didn't really go much farther with it because I still don't see that being feasible from a user perspective.

The best protection from ransomware continues to be making frequent backups of your files. Windows 7/10 have the "Previous Versions" functionality, which is nice because it keeps a history of files as they change. So if your files became encrypted or deleted a week ago, but you didn't notice until today, you will still find the undamaged versions.

You could also rig Windows 7/10 to trigger UAC when your data files are changed, if you put those files under the "Program Files" folder.

Thomas Carlisle
0

If the concern is about preventing files from being made inaccessible, a versioned filesystem (like VMS had), set up so that deleting old versions is not trivially possible, is probably among your best bets.

rackandboneman
  • Isn't this covered by the accepted answer? It seems more like a comment, since it does not address the encryption part but points to a tangent. – schroeder May 17 '17 at 06:59