72

Is image blurring an unsafe method to obfuscate information in images?

I.e., is it possible to "de-blur" the image, if you know the algorithm and the setting, or by trial & error?

For instance, the image below is the Google logotype blurred with the Photoshop CS6 Gaussian Blur filter @ a radius of 59.0 pixels.

Google logotype with Gaussian Blur

To the naked eye, it could be difficult to figure out the blurred content. But could the blurring be "reverse engineered" to reveal the original image, or at least something that is recognizable?

P A N
    Possible duplicate of [Is blurring face secure?](http://security.stackexchange.com/questions/62529/is-blurring-face-secure) – WhiteWinterWolf Jul 10 '16 at 13:40
    @WhiteWinterWolf - Hard to say since this is more general. Blurring faces is different from blurring things like text. – rovyko Jul 10 '16 at 17:05
  • @dexgecko: The answers do not seem to limit themselves to blurring faces. Actually, the only picture as example is precisely showing blurred text. And at the end, all answers conclude in saying that blurring is not safe, so it won't be safe for faces, text, people, objects, logos, landscapes, maps, drawing, etc., I'm not sure there is a lot of value in keeping each situation in separate questions. – WhiteWinterWolf Jul 10 '16 at 18:05
    @WhiteWinterWolf the literature indicates that it is much safer for content with high entropy (faces, landscape... i.e. "photos") than for text/numbers since it is much simpler to enumerate all possible cases for the latter (especially when given known context). – Jedi Jul 10 '16 at 18:17
    @Jedi: It depends, [some people](http://yuzhikov.com/articles/BlurredImagesRestoration1.htm) still seem to obtain nice results with landscapes. – WhiteWinterWolf Jul 10 '16 at 18:27
    @WhiteWinterWolf it isn't the results that are necessarily better. From the point of view of an exact match, it is much harder, but in images like the ones in the article, close enough is indistinguishable to the human eye. – Jedi Jul 10 '16 at 18:30
  • Actually without reading the text I personally could identify it as the Google logo. So that alone makes it unsafe. Of course it does matter how well known the object to blur is and how much you blur it – Ivo Jul 10 '16 at 22:31
  • I remember a story from years back about how faces were... hidden... swirled in some manner in pictures. It made for some embarrassing moments when those faces were "unswirled". I can't seem to find the story, but I imagine anything that can be done can be undone. Google: image unblur finds a wealth of techniques and image examples. – WernerCD Jul 11 '16 at 03:36
    Eureka! https://en.wikipedia.org/wiki/Christopher_Paul_Neil - hid his face and got caught with a reversal – WernerCD Jul 11 '16 at 03:39
    Related question: http://security.stackexchange.com/q/126932/47143 – kasperd Jul 11 '16 at 08:37
    Can't believe no one's suggested the best method, which is to shout "Enhance!" at your computer. – sethmlarson Jul 11 '16 at 13:30
  • Another interesting article on this subject [here](https://www.wired.com/2016/09/machine-learning-can-identify-pixelated-faces-researchers-show/) – Jedi Sep 12 '16 at 16:54

7 Answers

67

Is it possible to "de-blur" the image, if you know the algorithm and the setting, or by trial & error?

Here, I assume we are only considering images which were blurred using a filter applied to the image, and not as a result of a poor capture (motion/optical blur).

Deblurring definitely is possible, and many image-processing tools support it. However, blurring intentionally reduces the amount of information in the image, so truly getting back the original could require "brute force": generating a (humongously) large number of candidate images that all "blur" to the same final image.

Different types of blur lose different amounts of information, but it is possible to reverse all of them (albeit expensively). The cost of deblurring, and the number of possible outcomes, depends on the number of passes the blur filter makes and the number of neighbors considered while blurring. Once deblurred, many tools and services can automatically discard most of the candidate outcomes based on knowledge of what type of image it is.
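As a minimal sketch of why this is tractable when you know the kernel (all parameters here are illustrative, not a real attack tool): a Gaussian blur is a convolution, i.e. a pointwise multiplication in the frequency domain, so, ignoring the cropping and 8-bit quantization that make real images lossy, you can divide it back out with a regularized (Wiener-style) inverse:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, kernel):
    """Blur as circular convolution: multiply the spectra."""
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def wiener_deblur(blurred, kernel, eps=1e-9):
    """Regularized inverse filter; eps keeps near-zero frequencies
    of the kernel from exploding. A real 8-bit image needs a much
    larger eps, and correspondingly recovers much less detail."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))

rng = np.random.default_rng(0)
original = rng.random((32, 32))          # stand-in image
k = gaussian_kernel(9, sigma=1.0)
restored = wiener_deblur(blur(original, k), k)
print(np.mean(np.abs(restored - original)))  # tiny: near-perfect recovery
```

The catch, as noted below in the comments, is that real editors crop the result and round it to 8 bits per channel; that rounding is exactly the "lost information" that forces larger regularization and leaves only an approximate reconstruction.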

For instance, this blog post talks about why blurring content with a low amount of entropy (e.g. checkbooks) is much less secure than blurring something like a human face.

In short, it is indeed possible to get back an image that, if "blurred", will result in the same image that you provided. But you cannot guarantee that this deblurred image is the only valid one; you will need some domain knowledge and image analysis, such as matching edges or checking that objects make semantic sense.

To the naked eye, it could be difficult to figure out the blurred content. But could the blurring be "reverse engineered" to reveal the original image, or at least something that is recognizable?

It is possible that blurring does not fundamentally transform the "signature" of an image, such that the histogram stays similar and allows matching. In your case, the human eye can actually make out that this could have been the Google logo (familiar colors), but the histogram is quite different. Google itself can't identify the image, and if you study the histogram and color clusters using this online tool, the images are quite different.

It would probably be safer to black out the sensitive content instead (see the post here).

I wish these things weren't possible (e.g. I used to try to go as fast as possible near speed traps so that motion blur would hide my number plates, but it never works anymore). Tools to deblur are fairly common now (e.g. Blurity), though they don't work as well on small computer-generated images (less information) as they do on photographs (see the sample of what I recovered).

Failed deblurring

For more references, the first chapter of Deblurring Images: Matrices, Spectra, and Filtering by Per Christian Hansen, James G. Nagy, and Dianne P. O'Leary is a really good introduction. It discusses how noise and other factors make recovery of the exact original image impossible ("Unfortunately there is no hope that we can recover the original image exactly!") but then describes how you can get a close match.

This survey compares different techniques used in forensic image reconstruction (it's almost 20 years old, so it focuses on fundamentals).

Finally, a link to Schneier's blog, where this is discussed in some detail.

Jedi
    Why would you expect driving faster to produce motion blur to work against speed cameras? Even if you're willing to drive 50% over the speed limit to prove your point (which seems a bad idea to say the least), they're probably already designed to catch people going 20% over, so you're only causing 25% more blurring. So if they were capable of reading the plate in the first place, they likely still are, and deblurring is required, they'd have implemented it already. – Cascabel Jul 11 '16 at 00:27
    @Jefromi [I was trying to bust myths](http://www.discovery.com/tv-shows/mythbusters/mythbusters-database/way-to-beat-police-speed-cameras/). – Jedi Jul 11 '16 at 00:30
    @Jedi next experiment, wrap your car in [this](http://petapixel.com/2016/07/01/anti-paparazzi-scarf-makes-flash-photography-impossible/) and try to beat red light cameras. – DasBeasto Jul 11 '16 at 12:09
    Sure, stay tuned for results. :-) The security question is "am I safe from other drivers?" – Jedi Jul 11 '16 at 12:48
  • I imagine it's even less safe if a face is blurred on a video? – gerrit Jul 11 '16 at 15:08
  • @gerrit theoretically you're right as you have many different angles and you can collate your "guesses" at deblurring. I've never seen a tool that identifies and deblurs objects in videos in action though. – Jedi Jul 11 '16 at 16:18
18

Yes, blurring is an unsafe way to censor data in images.

There is software that can easily reverse algorithmic blurring such as Gaussian blur, often with results legible enough to identify objects or read text.

Lie Ryan
    I guess it's highly depending on how deterministic the blurring algorithm is, and if it uses a RNG, how good it is. – vsz Jul 12 '16 at 06:33
@vsz: blur and Gaussian blur, AFAIK, are fully deterministic. Note that a deterministic algorithm is not necessarily easily invertible. A hash algorithm is fully deterministic, for example, but cannot be inverted easily. Depending on the blur radius and feature sizes, blurs may be fairly easy to invert, though it may not produce an exact inversion. – Lie Ryan Jul 12 '16 at 11:00
  • I agree completely. Especially good point is the hash example. Still, my point was that randomness can help in reducing the useful information. – vsz Jul 12 '16 at 14:19
  • @LieRyan automated deblurring tools assume the blur is a convolution. "Hashing" "blurs" might be of those easier ones to deconvolve. – John Dvorak Jul 12 '16 at 19:12
11

It depends on two things: the image itself (amount of info), and the blur used (type+amount).

The Gaussian blur you mentioned re-distributes contrast (information) from wherever it is concentrated into a diffuse circle around it: more toward the center, less and less as you approach the edge of the circle (the blur radius).

Instead of a digital image, consider a sand art image of a checkerboard on a rickety table. If you pound your fist down on the table, you mimic a Gaussian blur, which should round out the squares, leaving behind connected overlapping circles. Looking at that messy table, you could still probably conclude that it was a checkerboard before the shakeup.

On the other hand, if you pounded the side of the table, you simulate a motion blur. If the distance of the jolt / inertia of the sand grains exceeds the width of the checkerboard squares, the table will be uniformly covered in sand, and it will be impossible to say if the pre-shake design was a checkerboard, stripes, or an already uniform distribution.

If you only have a Gaussian blur available and you want to obscure text, then you should blur by twice the line height and then posterize the image. Blurring spreads big details out into fine details, while posterizing discards fine details. You can also use anything else dramatic that discards fine details to obscure the blurred image: reducing the color depth, crushing the levels, over-compressing, etc.

In short, if the details are spread out and then fine details discarded, there's simply not enough information left to reliably recover the image.
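To see why posterizing after blurring helps, here is a toy 1-D sketch (the "scanline" and kernel values are purely illustrative): blurring spreads a high-contrast pattern into many intermediate shades, and posterizing then collapses those shades, so the fine gradations a deconvolution would rely on are gone.

```python
import numpy as np

def posterize(img, levels):
    """Quantize pixel values down to a few evenly spaced levels."""
    return np.round(img * (levels - 1)) / (levels - 1)

# A toy "scanline" of text-like high-contrast detail.
line = np.array([0., 1., 0., 1., 1., 0., 0., 1.] * 4)
kernel = np.array([1., 4., 6., 4., 1.]) / 16.0   # small blur kernel
blurred = np.convolve(line, kernel, mode="same")
crushed = posterize(blurred, levels=3)

# The blur produces many distinct intermediate shades; posterizing
# collapses them, destroying what deconvolution would need.
print(len(np.unique(blurred)), len(np.unique(crushed)))
```

Counting the distinct values before and after posterizing shows the information loss directly: once several different blurred values map to the same output level, no filter can tell them apart again.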

dandavis
4

My experience shows that Gaussian blur in the GIMP is not enough to fully obfuscate information. In fact, it's possible to use deconvolution to restore most of the image data after Gaussian blurring.

bwDraco
    It's well-known that pixelation isn't good enough for structured data (eg. credit-card numbers). If you can deconvolve the image to get the pixelated image, you can then use standard techniques to get the original data. – Mark Jul 11 '16 at 22:07
2

Deliberately blurring a region causes information loss. What you can restore depends on how much information was lost, which in turn depends on the blurring algorithm and its parameters. But even if a single image does not contain enough information, you might still recover what was lost if you have similar images in which the same region was blurred in a slightly different way (different parameters or algorithm, a slightly different blurred region, different scaling of the image...).

Thus you cannot say that you could reconstruct all important information in all cases. But neither can you say that blurring reliably hides all information. Whether reconstructing the information is possible depends on the blurring algorithm, its parameters, and of course on what you consider the important information.

There is lots of research in this area which you can easily find when searching for deblurring images.

Steffen Ullrich
    It isn't really the blurring itself that loses information, but that the result gets discretized into an 8 bit integer. – CodesInChaos Jul 10 '16 at 19:59
    To expand on that, a true Gassian blur is perfectly invertible: you can recover the exact original image -- but a true Gaussian blur produces an output image that is infinitely large with infinite-precision color depth. The cropped, quantized Gaussian blur that image editors use is a lossy process. – Mark Jul 11 '16 at 03:09
  • @CodesInChaos: in theory you are probably right that there are blurring algorithms which do not have information loss. In practice the OP obviously refers to the process of blurring how it is done in practice with the aim of information loss. And the specific algorithm and parameters are only used as an example. – Steffen Ullrich Jul 11 '16 at 04:33
  • @SteffenUllrich What kinda blur aims for information loss? I've never heard of such a thing. – Navin Jul 12 '16 at 14:41
2

The term "blur" is used to describe many kinds of visual effect, including those which might be called "smearing" or "smudging". If an image which were blurred mathematically were stored precisely, it would be possible to reconstruct the original perfectly. One of the effects of blurring, however, is to make the data more sensitive to certain kinds of noise or sampling artifacts, so if one applies enough blur it may be possible to obscure the image to the point that undoing the blur would amplify the noise enough to leave the content obscured.
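A toy 1-D illustration of this noise-amplification point (the kernel and signal here are made up for the sketch): with the blurred values stored at full precision, a plain inverse filter undoes the blur exactly, but after mere 8-bit rounding the same inverse can no longer recover the original cleanly.

```python
import numpy as np

k = np.array([0.24, 0.52, 0.24])        # small 1-D blur kernel
K = np.fft.fft(k, 64)                   # its 64-point spectrum (no exact zeros)

signal = np.zeros(64)
signal[30:34] = 1.0                     # a crisp "bar" to blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))

# Stored at full precision, naive inverse filtering is exact...
exact = np.real(np.fft.ifft(np.fft.fft(blurred) / K))

# ...but rounding to 8 bits injects noise, which the inverse filter
# amplifies most at the frequencies the blur nearly zeroed out.
stored = np.round(blurred * 255) / 255
noisy = np.real(np.fft.ifft(np.fft.fft(stored) / K))
print(np.max(np.abs(exact - signal)), np.max(np.abs(noisy - signal)))
```

The stronger the blur, the closer its spectrum gets to zero at some frequencies, and the more the rounding noise at those frequencies is magnified on inversion, which is exactly the effect described above.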

A better approach than blurring, however, is to "smear" or "smudge" the data; these terms are less well defined, but the basic essence is that some portions of the original image contribute a lot more to the final image than others, and in many cases some parts won't contribute at all. Many such effects can leave very hard edges that may be distracting, but such distractions can be reduced by applying a mathematical blur after smudging.

As a final note, "pixelization" approaches that involve mathematical averaging will destroy information, as will smearing or smudging, but they may not be as secure as one might think. If one were to pixelate the account number on a check such that each digit was represented by about a 2x2 matrix, such numbers couldn't be "read" normally, but someone who could reproduce the camera alignment and pixelization settings might be able to figure out what the numbers would have to be to yield the observed pattern of light and dark squares. Such a problem could be avoided by digitally replacing the account number with a standard "test" account number which is guaranteed not to map to a real account, and then digitally obscuring that. (If a viewer could make out that the account number was something like "1234567890", that might be distracting; but if it was pixelated sufficiently that advanced reconstruction would be required, nobody who didn't go through such effort should be distracted by the fake account number.)

supercat
2

There is one case where blurring can definitely be reversed: If you are blurring computer generated text, for example screen shots.

For example, if the text above were blurred, and even if the blurring stretched over several characters, you could write software that tries all letter combinations and finds those that, when blurred, produce exactly the same pixel values as the blurred image. This works because in that case the actual information is only about 8 bits per character, not the 8x5x24 bits that the graphics of a character in an 8x5 box would contain.

(Another post mentioned credit card numbers; a photo of a 16-digit number contains less than 54 bits of actual information.)
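A sketch of that enumeration attack, using made-up 3x3 "glyphs" in place of a real font renderer (everything here, including the glyph shapes and blur settings, is illustrative): blur each candidate string with the same known settings and keep the one whose blur matches the target.

```python
import numpy as np
from itertools import product

# Hypothetical 3x3 "glyphs" standing in for rendered characters.
GLYPHS = {
    "A": np.array([[0, 1, 0], [1, 1, 1], [1, 0, 1]], float),
    "B": np.array([[1, 1, 0], [1, 1, 1], [1, 1, 0]], float),
    "C": np.array([[0, 1, 1], [1, 0, 0], [0, 1, 1]], float),
}

def render(text):
    """Lay the glyphs side by side into one image row."""
    return np.hstack([GLYPHS[c] for c in text])

def blur(img, sigma=1.0):
    """Separable Gaussian blur with a small 5-tap kernel."""
    ax = np.arange(-2, 3)
    k = np.exp(-ax ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, out)

secret = "CAB"
target = blur(render(secret))   # what the attacker actually sees

# Enumerate every candidate string, blur it the same way, compare.
best = min(("".join(p) for p in product(GLYPHS, repeat=len(secret))),
           key=lambda t: np.sum((blur(render(t)) - target) ** 2))
print(best)  # recovers "CAB"
```

With a real font the candidate space is larger, but for short structured strings (numbers, license plates) it is still trivially enumerable, which is why blurred text is so much weaker than blurred photographs.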

gnasher729