
I was poking around one of Google's lesser-known programs, Build with Chrome, when I noticed something pretty funny with their search bar. When you type in the standard XSS payload <script>alert(1);</script>, a notification pops up as if the site were vulnerable to XSS. But here's the funny thing: if you change the payload at all (say, to <script>alert(2);</script>), nothing happens.

After puzzling over this, the only explanation I can come up with is that some snickering developer at Google thought it would be funny to send a bunch of people down a bunny trail, and added an if statement to the search: if somebody searches for exactly <script>alert(1);</script>, display a notification containing '1'. Is this a wise security practice?
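If that guess is right, the decoy could be as simple as an exact-match check in the search handler. A minimal sketch of what I'm imagining (purely hypothetical; all names are assumptions, since Google's actual code isn't public):

```javascript
// Hypothetical decoy: match one exact canonical payload and fake a "hit".
const DECOY_PAYLOAD = '<script>alert(1);</script>';

function handleSearch(query) {
  if (query === DECOY_PAYLOAD) {
    // Mimic a successful XSS: show a notification containing "1",
    // and possibly flag the client for closer monitoring.
    return { notification: '1', flagged: true };
  }
  // Any other input, including alert(2), falls through to a normal search.
  return { notification: null, flagged: false };
}
```

That would explain why alert(2) does nothing: only the one canonical string anybody copies from a tutorial is matched.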

It seems to me that there are two sides to this. On the one hand, if attackers are focused on a fake vulnerability, they are less likely to find the real ones. But on the other hand, I bet Google has gotten over a thousand bug bounty reports on this one fake vulnerability alone, which makes it harder to sift out the good reports.

So here is my question: is it a good security practice to use sneaky little fake vulnerabilities to protect yourself, or is it more work than it saves?

B00TK1D
    I bet it triggers some automated system to pay extra attention to you, too. – Xiong Chiamiov Mar 20 '17 at 21:59
  • I have heard of research done into using vulnerability 'simulation' that is triggered exactly like this, and causes the attacker to be flagged for extra attention and perhaps time wasting. In my opinion, this is an obvious attack surface increase - so it is very important that, if implemented, it is done well. – MiaoHatola Mar 20 '17 at 22:43
  • This question currently has two close votes (opinion-based). You might want to rephrase it a bit, as I think that it's mainly an issue of your formulation ("I am curious what people's opinion is"). "Is this a known defense mechanism", "What is this defense mechanism called", or "What are the security ramifications of this approach" might all be better questions. – tim Mar 21 '17 at 10:22

1 Answer

In a way, this is pretty similar, in effect and in intention, to running a honeypot: you gain advance notice of who might be poking around, and to some extent gain insight into their capabilities (though in this case, obviously, you don't learn all that much).

It may also allow profiling attackers to some degree: unsophisticated actors would either move on (because their scanning script checks for a bunch of things before spitting out a report), or, if a human was at the keyboard, perhaps do what you did and test out a few variations (in which case, you can indeed bet your testing caused your activity to be looked at a bit more closely).

Sophisticated actors might have a script that followed up on such a result, which might expose the details of their script (or thinking).

So I would assume it isn't "just" a dev having some fun - there is almost certainly some kind of purpose to this (even if that purpose was defined after the fact).

And yes - considered as a honeypot of sorts, I would tend to think this is worth it on balance - I am sure it is reasonably easy to create filters for any reports this might generate.
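For instance, a triage filter could automatically deprioritize reports whose only evidence is the known decoy string. A rough sketch, where the function name, decoy list, and the heuristic itself are all my own assumptions, not anything Google has described:

```javascript
// Illustrative triage helper: does this bug-bounty report contain any
// script-payload evidence beyond the known decoy string(s)?
const KNOWN_DECOYS = ['<script>alert(1);</script>'];

function isDecoyOnlyReport(reportText) {
  // Remove every occurrence of each known decoy payload...
  let stripped = reportText;
  for (const decoy of KNOWN_DECOYS) {
    stripped = stripped.split(decoy).join('');
  }
  // ...then check whether any other script-like payload remains.
  return !/<script\b/i.test(stripped);
}
```

A filter this crude would only be a first pass (a real report might cite the decoy alongside a genuine finding), but it shows why the flood of duplicate reports is a manageable cost.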

iwaseatenbyagrue