
The outdated DREAD risk model (Wikipedia) lists Discoverability as a criterion for judging the severity of a vulnerability. The idea is that something which is not publicly known, and which an attacker would be unlikely to discover without deep knowledge of the application in question, does not warrant as much panic as, for example, something with a published CVE (assuming there are no published attack prototypes, since that bleeds into the Exploitability metric).

I notice that CVSS v3.0 has no metric for how likely the vulnerability is to be independently discovered.

Wikipedia has this to say:

Discoverability debate

Some security experts feel that including the "Discoverability" element as the last D rewards security through obscurity, so some organizations have either moved to a DREAD-D "DREAD minus D" scale (which omits Discoverability) or always assume that Discoverability is at its maximum rating.

So my question is basically: apart from the very obvious "security by obscurity is bad", what are the arguments for and against using Discoverability as part of a risk analysis?
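To make the arithmetic concrete, here is a toy sketch (my own illustration, not from any standard; the 1-10 per-category scale and the straight average are just the common convention) of how "DREAD minus D", or assuming maximum Discoverability, changes a rating:

    # Toy DREAD scoring; the ratings below are hypothetical.
    def dread_score(damage, reproducibility, exploitability, affected_users,
                    discoverability=None):
        """Average the categories; omit Discoverability if None (DREAD-D)."""
        ratings = [damage, reproducibility, exploitability, affected_users]
        if discoverability is not None:
            ratings.append(discoverability)
        return sum(ratings) / len(ratings)

    # A severe but hard-to-find bug: full DREAD rewards the obscurity...
    print(dread_score(9, 8, 7, 9, discoverability=2))   # 7.0
    # ...while DREAD-D, or assuming maximum Discoverability, does not:
    print(dread_score(9, 8, 7, 9))                      # 8.25
    print(dread_score(9, 8, 7, 9, discoverability=10))  # 8.6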

Mike Ounsworth
  • As an aside, CVSS is only used to rate vulnerabilities that have already been found and disclosed... so having a discoverability rating might be pointless. – nbering Jul 12 '18 at 22:14
  • @nbering You're sure there are no organizations that use the CVSS scale for assessments of vulnerabilities that are internally discovered or otherwise not publicly known? – Mike Ounsworth Jul 12 '18 at 22:18
  • I wouldn't say no one ever does... but it's designed by an organization that exists to report vulnerabilities and co-ordinate mitigations on a global scale. I don't think they care about discoverability, since if they're rating it, it's already been discovered. – nbering Jul 12 '18 at 22:19
  • Fair; CVSS wasn't designed with 0days in mind. – Mike Ounsworth Jul 12 '18 at 22:20
  • That aside... it's an interesting question. From an academic standpoint, discoverability isn't provable and is difficult to rate. From a pragmatic standpoint, I can see it being considered when triaging potential issues with an otherwise similar impact. I've never used the DREAD model, myself. – nbering Jul 12 '18 at 22:23
  • @nbering - to be fair, other risk score factors can be hard to quantify as well. For example, "exploitability"/"attack complexity". Should it be based on how likely an attack is to be successful? How many steps are required to exploit it? The conditions required for the exploit to be possible? In some ways, discoverability could be objectively measured as the inverse of time between initial vulnerability and initial discovery. The longer it took to be noticed, the lower the discoverability. – Mr. Llama Jul 13 '18 at 16:35
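Mr. Llama's inverse-time idea could be sketched like this (the exponential mapping and the one-year half-life are purely illustrative assumptions, not an established metric):

    from datetime import date

    # Illustrative only: map time-to-discovery onto a 0-10 "discoverability"
    # rating that halves with every half_life_days of non-discovery.
    def discoverability_rating(introduced: date, discovered: date,
                               half_life_days: float = 365.0) -> float:
        days = (discovered - introduced).days
        return 10.0 * 0.5 ** (days / half_life_days)

    # Dirty COW sat in the kernel for roughly nine years (2007 to 2016),
    # so this mapping rates its discoverability near zero.
    print(discoverability_rating(date(2007, 7, 8), date(2016, 10, 19)))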

3 Answers


Low discoverability doesn't necessarily mean "security by obscurity". It could simply mean that the vulnerability lies deep in a portion of functionality that is rarely investigated. It could also mean that triggering it requires a corner case so narrow that even the initial discovery was unlikely to ever happen. Examples include Dirty COW and Spectre/Meltdown, both of which took almost a decade to be noticed.

On the other hand, low discoverability shouldn't necessarily mean low priority. If a vulnerability with low discoverability is reported, other factors - such as impact and ease of exploitation - should be taken into consideration when determining an appropriate response. In fact, such considerations are exactly why risk rating scores like DREAD and CVSS exist. However, as discussed in the comments, "discoverability" may only have meaning in the context of a private disclosure. Once a vulnerability is public, its discoverability is effectively 100% and no longer a relevant consideration.

Mr. Llama

I'm on the side that doesn't like the use of discoverability. It's poorly defined, and for any given definition, people are especially bad at estimating it.

Given enough time and attention, every piece of software is vulnerable, and every vulnerability, including the ones you don't know you have, is discoverable.

There are people who really know their attack surface and their threat actors, and who can perhaps give a good estimate under some definition of discoverability, but this is not the common case, and it is too easy to be surprised.

I have observed organizations with terrible vulnerabilities hiding in plain sight that never get attacked, perhaps because there is always lower-hanging fruit.

Also, I have observed that what many executives actually intuit when asked for a "discoverability" measure is: how bad is it for me if we're compromised because of this? One end of that spectrum is Equifax (lose your job and everything else); the other end is an "oops, sorry", a cost of doing business. That intuition is not necessarily a bad way to prioritize, but it has nothing to do with any reasonable definition of "discoverability".

So I don't think it should have a role.

Jonah Benton
  • +1 Interesting points that it often gets used as a proxy for "How embarrassing would this be?", and that even when used properly it's nearly impossible to estimate. – Mike Ounsworth Jul 12 '18 at 22:43

Yes. While DREAD might be outdated, other models include similar concepts and define them more rigidly. In FAIR, for example, the Vulnerability aspect is determined as the ratio of Threat Capability to Difficulty, where Threat Capability means how capable your specific threat actor is (on a scale of 1 to 100) and Difficulty means the barrier it needs to overcome to be successful (also on a scale of 1 to 100). The fact that considerable knowledge is needed to discover the exploitable weakness would add a few points to the Difficulty.
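A rough sketch of that comparison (simplified: real FAIR uses calibrated range estimates, and the triangular distributions, modes, and trial count here are my own assumptions):

    import random

    # Simplified FAIR-style Vulnerability: the probability that a sampled
    # Threat Capability exceeds a sampled Difficulty, both on a 1-100 scale.
    def vulnerability(tcap_mode, diff_mode, trials=100_000):
        hits = sum(
            random.triangular(1, 100, tcap_mode) > random.triangular(1, 100, diff_mode)
            for _ in range(trials)
        )
        return hits / trials

    print(vulnerability(tcap_mode=60, diff_mode=50))  # clearly above 0.5
    print(vulnerability(tcap_mode=60, diff_mode=55))  # a few points lower

Adding those "few points" of Difficulty for a hard-to-discover weakness nudges the Vulnerability down gradually rather than flipping the result.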

The reason Discoverability is rightfully dropped from DREAD is that it is one of many factors of a subaspect of a subaspect of risk. It doesn't deserve the first-order standing it has in DREAD. A low discoverability knocks out low-level attackers and has little effect on more skilled attackers.

In addition, DREAD is a qualitative assessment method. As such, even one or two points in any category can shift the result into a different "box", further exaggerating the effect. In a proper statistical or quantitative model, the effect is much more gradual.
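For instance, with common High/Medium/Low thresholds (the exact cut-offs below are an assumption, not part of DREAD itself), a single point in one category can flip the box even though the average barely moves:

    # The bucket thresholds are a common convention, not defined by DREAD.
    def bucket(score):
        if score >= 7:
            return "High"
        if score >= 4:
            return "Medium"
        return "Low"

    a = (6 + 7 + 7 + 7 + 7) / 5   # 6.8
    b = (7 + 7 + 7 + 7 + 7) / 5   # 7.0
    print(a, bucket(a))  # 6.8 Medium
    print(b, bucket(b))  # 7.0 High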

Tom
  • I agree with all statements here. Unfortunately also, discoverability can be the defining factor for a low-level attacker. There are many bots out there cataloging servers for the software they run, just waiting for the right critical vulnerability to pop up in the servers they're cataloging. Things like the X-Powered-By header or version metadata endpoints on a CMS solution make it too easy. – nbering Jul 13 '18 at 06:12
  • A bot like that would not qualify as a low-level attacker. Someone is behind that bot and he knows enough about security to do such scanning. A low-level attacker would be, e.g. some disgruntled employee with no specific security knowledge, or a script kiddie just starting out or something like that. – Tom Jul 13 '18 at 06:14
  • Fair enough. That's a good bit of perspective. I've usually considered automated bot attacks and script kiddies to be low-level, but you are right. There are a few levels of sophistication below that. – nbering Jul 13 '18 at 06:18