25

Let's say I found a possible vulnerability in a security system. The system has been universally considered sound for years and nowadays is used worldwide.

I am not an expert in security, but there are things that worry me:

  • Using a security system in the belief that it is safe is worse than not using one at all, as I rely completely on its security;
  • The system is deployed in different countries, so revealing details about why it is unsafe may compromise those who do not update their systems straight away;
  • I would like to take credit for my work / discovery, but at the same time the discovery may be too big for me;
  • There is currently no real alternative to this security system, since it has been considered the best for years and nobody has spent time and resources looking for a better one;
  • Revealing problems without offering a solution feels like telling the scientific community "Hey, everything you considered safe is not, hurry and find a better solution".

For all the reasons above, I wonder what someone should do in such an awkward situation.

Forgive my vagueness, but I think you understand the reason.

Anders
  • 64,406
  • 24
  • 178
  • 215
Jacob
  • 269
  • 3
  • 4
  • 20
    A bug bounty program such as the [Zero Day Initiative](http://www.zerodayinitiative.com/) will responsibly handle disclosure and pay you along the way – Neil Smithline Jun 30 '16 at 00:32
  • 20
    You are not responsible for providing a fix. It is common for security researchers not to provide fixes. Responsibly disclosing the vulnerability will be more than enough – Neil Smithline Jun 30 '16 at 00:35
  • 27
    Odds are that what you think is a security flaw actually isn't. Either because you got the reasoning wrong, or simply because that threat isn't actually "real". When you design a security system you take into account its purpose and which kinds of threats it should handle. It may well be that the one you found was considered during the design phase and not deemed significant for the system (e.g. because exploiting it isn't trivial and requires a whole lot of work, like having physical access and huge computing power at hand etc.) – Bakuriu Jun 30 '16 at 09:02
  • 9
    For example: some time ago there were a few articles about "security flaws" in KeePass. These "flaws" amounted to: install a keylogger and log the master password. But the whole point of KeePass is *not* to protect the user from an infected local machine in the first place. – Bakuriu Jun 30 '16 at 09:05
  • Wait, I know what this is. You're talking about the common deadbolt lock, and you've discovered bump keys. –  Jun 30 '16 at 21:06
  • Agree with @Bakuriu; there's many, many, cases where people think they've found a flaw which isn't one. If you [google for "it rather involved being on the other side"](https://www.google.com/webhp?q=%22it+rather+involved+being+on+the+other+side%22+site:blogs.msdn.microsoft.com) you'll get some great examples from Raymond Chen. – RobIII Jun 30 '16 at 23:15
  • I quite doubt that there really is "no real alternative to this security system". When a system (e.g. a webpage) is beyond repair (and you can't simply remove it), you add security measures a layer below: HTTP authorization, filtering the allowed IPs in the firewall, requiring access through a VPN… Not perfect, but it generally serves as a workaround. – Ángel Jul 01 '16 at 00:23

4 Answers

41

It would be a matter of opinion on how you should proceed. We already have a question explaining different ethical ways to report a vulnerability.

First off, for something this big I would personally recommend you remain anonymous at first, while leaving yourself a way to later prove it was indeed you who discovered the vulnerability. Create a brand-new PGP key (not tied to your identity), sign your messages with it and publish them anonymously (over Tor, for example). If you later feel confident that everything is fine and there are no hordes of bloodthirsty lawyers coming after you, you can use that key to sign a message stating that it was you (with your full name).
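A minimal sketch of that workflow with GnuPG (the key name, file names and message text are illustrative; a throwaway `GNUPGHOME` keeps the pseudonymous key off your regular keyring):

```shell
# Work in a throwaway keyring, separate from any keys tied to your identity
export GNUPGHOME="$(mktemp -d)"

# Create a pseudonymous key with no passphrase ("anon" is an illustrative user ID)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'anon' default default never

# Clearsign the disclosure; later messages signed with the same key
# are verifiably from the same (still anonymous) author
echo 'Vulnerability report v1' > report.txt
gpg --batch --yes --pinentry-mode loopback --passphrase '' --clearsign report.txt

# Anyone holding the public key can verify the signature
gpg --verify report.txt.asc
```

Revealing your identity later is then a matter of signing a message containing your real name with the same key, so guard the private key carefully in the meantime.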

I recommend the responsible disclosure approach: get in touch with the developers of said security system and leave them some time to deploy a patch.

Should you get no response, a stupid response or even hordes of angry lawyers barking at you, just use your anonymity to go public and let everyone know how that company deals with security flaws.

I disagree that disclosing a flaw without providing an alternative is a bad idea. That flaw may already be known to the bad guys, and it is better that everyone knows about it (and at least knows the system is not secure) than to let the bad guys quietly enjoy using that information. Also, even if there is no alternative now, this discovery may motivate someone to actually build a secure alternative.

Finally, while I do not know what kind of "security system" we're talking about, security should be done in layers and any corporation which relies on that system as its only defense deserves to go down, just like those who don't bother to update once a fix becomes available.

André Borie
  • 12,706
  • 3
  • 39
  • 76
  • 5
    " security should be done in layers and any corporation which relies on that system as its only defense deserves to go down" - dammit, should we roll our own TLS (at least for passwords) complete with one-use salts for the case that HTTPS should be found vulnerable to eavesdropping? – John Dvorak Jun 30 '16 at 04:08
  • 5
    @JanDvorak: I think André's point is not to develop new security mechanisms, but to use **more than one** of the already proven ones. – hamena314 Jun 30 '16 at 07:33
  • 1
    @hamena314 ok, but should I download a Javascript crypto-library so that I could hash the passwords that I send over HTTPS? – John Dvorak Jun 30 '16 at 07:39
  • 5
    @JanDvorak I don't think that's the case, but you should secure your application against XSS (for example). Also, you could do other things like rate limiting and request throttling. – Ismael Miguel Jun 30 '16 at 09:36
  • 4
    @JanDvorak: Given that the crypto library would be loaded over HTTPS, it wouldn't provide additional safety. – Christian Jun 30 '16 at 16:48
8

You will want to verify that this is an actual vulnerability, typically by creating a Proof-of-Concept. From there, you could:

  • Contact the vendor of the software in private, explain your findings and the problems associated with it, attaching the PoC for them to analyze.
  • Release the vulnerability to the public, hoping that the extra attention will persuade the vendor to release a fix.
  • Be completely silent about it, hoping blackhats do not find it.

The first option is probably the best. You do not need to disclose your personal identity to release information. I have used several handles and throw-away accounts for such communications under similar circumstances.

The second option is risky. You risk public backlash ("Why would they give an open vulnerability to the people who would abuse it?") and you put yourself and the vendor in a bad spot. While some people may applaud your efforts, others will see this as a problem.

The third option is even more risky. If you have the capability of creating a PoC, then so do the people with malicious intent. The only difference here is your tried and true Security Through Obscurity aka a really bad time.

Your best bet is to start with the first option, falling back to the second if you get completely ignored. I have seen stories of people contacting vendors for months, finally releasing the vulnerability to the public, and only then seeing a patch addressing the issue. In an ideal world, the vendor will take this seriously and address the problem once you contact them. Unless they are negligent about the issue or find that it is a non-issue, they will be the best ones to address it. If they fail to address it, they run the real risk of becoming the next Java or Flash, i.e. completely untrusted by the public.

That's typically bad for business.

5

You want to take credit but you don't want to be known? That just won't work.
Decide what's more important: credit, or making the vulnerability known anonymously.

If making it known is more important: I see you have a fresh account etc., that's good. (If it is really that big, I hope you also have a fresh OS on a new device, no browser profile, and Tor. If not, starting now is too late.)
As a next step, if you are not even sure it is a vulnerability, just telling us the situation here is a start.

About the "not telling because it's exploitable" part: well, if you're concerned about that and about your identity, I see no way to tell anyone. In this case ... just forget it?

techraf
  • 9,141
  • 11
  • 44
  • 62
deviantfan
  • 3,854
  • 21
  • 22
  • 3
    Check out my answer, there is a way to remain anonymous while having the possibility to prove it was you should you choose to do so. – André Borie Jun 29 '16 at 23:13
  • @AndréBorie Well, but this still includes giving up anonymity... – deviantfan Jun 29 '16 at 23:42
  • 3
    With my solution it depends on how you guard the private key. If you don't get it stolen and don't reuse the key, your anonymity remains fine. – André Borie Jun 30 '16 at 00:18
  • @AndréBorie My point is, if he really never wants to give up anonymity, he doesn't need that... – deviantfan Jun 30 '16 at 00:25
  • 2
    @deviantfan The understanding is that OP wants to give up anonymity later if things go well, and remain anonymous if things don't. – John Dvorak Jun 30 '16 at 04:11
  • 6
    @AndréBorie Pick some text with your name in it (like "I am John Smith, johnsmith7@example.com. On 01/07/2016 I disclosed the Toothbooger vulnerability. 45tiy3wiucghcfq2roxq3wtv546fuo623" (and use actual random text rather than keyboard mashing)). Save this text somewhere very safe, then hash it, then include the hash in the disclosure. If you later wish to become nonymous (?), release the text and the hashing algorithm used. – user253751 Jun 30 '16 at 07:39
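The commit-and-reveal scheme from the last comment can be sketched as follows (the name, email, date and vulnerability name are illustrative placeholders, as in the comment):

```python
import hashlib
import secrets

# Commit: write a statement naming yourself, add a random nonce so the
# statement cannot be brute-forced from its hash, and publish only the
# hash alongside the anonymous disclosure.
nonce = secrets.token_hex(32)
statement = (
    "I am John Smith, johnsmith7@example.com. On 01/07/2016 I disclosed "
    f"the Toothbooger vulnerability. Nonce: {nonce}"
)
commitment = hashlib.sha256(statement.encode()).hexdigest()
print(commitment)  # publish this hash with the disclosure

# Reveal: later, publish the full statement; anyone can recompute the
# hash and check it against the commitment published earlier.
assert hashlib.sha256(statement.encode()).hexdigest() == commitment
```

Keeping the statement (and nonce) somewhere safe is essential: losing it means losing the only way to open the commitment.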
2

Thebluefish omitted to mention "responsible disclosure", where a trusted third party provides an escrow service for a limited amount of time, allowing the vendor to fix the fault while ensuring the discoverer is credited. CERT sponsors such an approach.

But, as the previous answer says, you need a proof of concept. Sadly, "security" products introduce vulnerabilities more often than most professionals expect.

techraf
  • 9,141
  • 11
  • 44
  • 62
symcbean
  • 18,278
  • 39
  • 73