14

What risk rating methods, models, assessments or methodologies are used for calculating or estimating a risk score for vulnerabilities (for example, those described in the OWASP Top 10), and which of those are best to use for web vulnerabilities?

I'm aware of the following three:

  1. OWASP Risk Rating Methodology,
  2. CVSS (version 1, 2 and 3),
  3. Open FAIR (thanks to @atdre).

Do other models or methodologies for an equivalent purpose exist (or are any currently in development)?

Bob Ortiz

7 Answers

5

The Factor Analysis of Information Risk (FAIR), or any Value-at-Risk (VaR) model, whether based on the Monte Carlo method, Bayesian statistics, or other sound variable-crunching, model-bound, formulaic risk analysis -- any of these will culminate in a more efficient risk calculation.

If you are a member of ISC2 (e.g., a CISSP), you can check out CyVaR by PivotPoint Analytics (formerly part of CyberPoint International) -- http://go.pivotpointra.com/lp-isc2-member-benefit

CyVaR, similar to RiskLens, is a way of engaging information risk through a VaR model. There is also a ton of information in the following three books (listed in order of increasing efficiency):

  1. Measuring and Managing Information Risk
  2. Cyber-Risk Informatics: Engineering Evaluation with Data Science
  3. How to Measure Anything in Cybersecurity Risk

Book one would be the most actionable of the three today, but that may change soon. In that book, you will commonly see vulnerabilities treated as a single variable called Vulnerability, one of several variables that bubble up to FAIR's version of likelihood, which the FAIR standard refers to as Loss Event Frequency.

What you need to know about Loss Event Frequency (LEF) is that LEF dictates not only "how likely" but "how often". A generic likelihood variable would leave a questioner hanging as to whether an event is "likely" to happen today, this week, this month, this year, this decade, or at some other unspecified time, and it doesn't say how many events will occur. LEF changes all of this -- it gives decision makers the power to understand how often and how many times (e.g., per year) a loss event will occur.
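To make the "how often per year" point concrete, here is a minimal Monte Carlo sketch of how an LEF-style frequency feeds a VaR-style annual loss distribution. The distributions and parameter values are invented for illustration; this is not the Open FAIR standard calculation itself.

```python
import numpy as np

# All parameter values and distributions here are invented for
# illustration; this is NOT the Open FAIR standard calculation.
rng = np.random.default_rng(42)
TRIALS = 100_000

LEF_MEAN = 2.0                          # assumed mean loss events per year
LOSS_LOW, LOSS_HIGH = 50_000, 500_000   # assumed per-event loss (USD)

# LEF is a frequency: sample "how many loss events this year" per trial.
events_per_year = rng.poisson(LEF_MEAN, TRIALS)

# Sum a sampled loss magnitude for each event in each simulated year.
annual_loss = np.array([
    rng.uniform(LOSS_LOW, LOSS_HIGH, n).sum() for n in events_per_year
])

print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
print(f"95th-percentile annual loss (VaR): ${np.percentile(annual_loss, 95):,.0f}")
```

Note that the output is a loss distribution in dollars per year, not a severity label -- which is exactly the shift from "how likely" to "how often and how much".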

While I won't likely convince you in this single answer, the Vulnerability variable is difficult to control through mitigation (the Open FAIR standard refers to these as vulnerability controls), which works by raising the underlying Difficulty variable. If a Threat Community's (TCom) Threat Capability (TCap) variable overcomes the Difficulty variable, then any TCom with a sufficiently advanced TCap will cause Vulnerability to rise above 90 percent (often to 100 percent), and thus a loss event will almost certainly occur on this side of the LEF equation.
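As a rough illustration of that interaction, the sketch below treats Vulnerability as the probability that a sampled TCap exceeds a sampled Difficulty. The beta distributions and their parameters are assumptions chosen to mimic an advanced TCom, not calibrated FAIR estimates.

```python
import numpy as np

# Beta parameters below are assumptions chosen to mimic an advanced
# TCom facing middling controls; they are not calibrated FAIR estimates.
rng = np.random.default_rng(7)
TRIALS = 100_000

# Model TCap and Difficulty as percentiles (0-100), per FAIR convention.
tcap = rng.beta(8, 2, TRIALS) * 100        # advanced TCom: skews high
difficulty = rng.beta(5, 5, TRIALS) * 100  # middling resistance strength

# Vulnerability: the probability that the TCom's capability overcomes
# the Difficulty posed by the controls.
vulnerability = np.mean(tcap > difficulty)
print(f"Vulnerability: {vulnerability:.0%}")  # roughly 94% for these inputs
```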

Let me put this in context for you. Let's say that the Dukes (aka CozyBear, aka APT29) decide that a web-app entry point will grant them a web shell against a target US-based financial services company. A few officers of the Dukes (i.e., handlers) have led a few confidential assets in Eastern Europe to believe that they are participating in an online carding forum. One of the assets has some webappsec experience and is handed a fully cracked copy of a webappsec scanner, Acunetix, at the latest version. The asset runs it against a list of financial services websites provided by the handlers, who used a previous confidential-asset team to conduct the reconnaissance necessary to build a list they are proud of.

After about three weeks, the asset is confident that many webappsec vulnerabilities could be used to upload a web shell, but some of them are SQL injection against less common RDBMS backends, such as PGSQL and DB2. Perhaps one of these also has a web application firewall (WAF) in place. The Eastern European asset does not have the TCap to bypass the WAF or to load a web shell through these vectors. However, after reporting through the forum (our secondary TCom) up the chain and back around to the Dukes (the primary TCom), the Dukes have the necessary talent to bypass the WAF (e.g., custom tamper scripts they built against the target's WAF using a modified version of sqlmap) as well as the understanding of how to build a web shell against PGSQL and DB2 backends. Thus, the TCap of the Dukes TCom is high enough to bypass all of the Difficulty.

In this case, the target web servers happen to also be connected to at least one Microsoft domain through BeyondTrust PowerBroker or winbind, and the Dukes utilize the getent(1) utility to dump the users and then move laterally through the network via SMBRelay and JASBUG. Let's say that the domain has a trust relationship back to the Microsoft forest that eventually gives the Dukes Forest Administrator access, and thus read-write access to all domains, including ones hosting services such as SAP ERP and SAP S/4HANA, in addition to Oracle Hyperion, HMIS, Essbase, and JDE. These services also host databases that contain the employee, partner, contractor, and customer records -- in addition to financial records, company-wide financial services, and intellectual property. The Dukes then install persistence layers that allow them to act as system administrators with an average longevity of 5 years (typically 3-9 years). At this point, the VaR is the entire company's top-line growth -- all of its sales; all of its revenue-generating business.

Do you rate such a scenario as critical? Do you use bold or red in your compromise-assessment report? According to FAIR, the VaR should be shown in monetary amounts, such as US dollars.

atdre
  • What is the point of the elaborate example in the last three paragraphs? I don't understand how the detailed case description applies to the question of vulnerability scoring. – Sjoerd Jun 28 '16 at 08:20
  • In [this talk](http://www.irongeek.com/i.php?page=videos/bsidescleveland2016/100-morning-keynote-ian-amit) Ian Amit advocates using a VaR model in order to communicate the business risk instead of the vulnerability impact. – Sjoerd Jun 28 '16 at 08:22
  • @Sjoerd: The elaborate example provides context for how TCap matters only if you are facing a single TCom. When multiple TComs share or plan from each other's inputs and outputs, any given vulnerability may be valued completely differently by a series of TComs than by the target organization. Thus, Vulnerability, as a variable in a VaR equation, will not usually allow the organization to properly affect the Loss Event Frequency. – atdre Jun 28 '16 at 13:24
  • https://www.schneier.com/blog/archives/2019/11/resources_for_m.html – atdre Nov 01 '19 at 20:54
3

CVSS tends to be the risk rating model used in nearly all vulnerability reports.

One model mentioned in Microsoft's SDL is called DREAD.

This is primarily used during the threat modelling stage to measure potential risks in software design; however, it can be adapted to vulnerabilities too. Here is another useful link on DREAD.
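For reference, here is a toy DREAD calculator. It uses the commonly described convention of rating each of the five components from 1 to 10 and averaging them; the example ratings themselves are hypothetical.

```python
# The 1-10 scales and the simple average reflect the commonly described
# DREAD convention; the example ratings below are hypothetical.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD components, each rated 1-10."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical ratings for a reflected XSS finding:
print(dread_score(damage=6, reproducibility=8, exploitability=7,
                  affected_users=6, discoverability=9))  # -> 7.2
```

A common criticism of DREAD is visible even in this sketch: each rating is a subjective 1-10 judgment, so two raters can disagree widely.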

Colin Cassidy
  • Perhaps you should mention that Dan Sellers from Microsoft said as early as 2005 that "DREAD is dead", so I would not recommend using it. – Frank Jun 22 '16 at 12:58
  • I also do not recommend using scoring systems of any kind. Information risk (even from web application vulnerabilities) must be modeled using MC VaR or equivalent in order to be relevant to the business. – atdre Jun 27 '16 at 20:26
  • I'm not necessarily recommending it myself, it is however another risk rating model, which is what the OP wanted. – Colin Cassidy Jun 28 '16 at 04:57
1

CVSS is very popular: many tools, such as Acunetix and IBM AppScan, use it. WebInspect and Burp Suite, which are also web application scanners, instead rely on the experience of their researchers to assign a severity to each finding. CVSS v3 is a marked improvement over CVSS v2 and is better suited to web application vulnerabilities.
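Since the CVSS v3.1 base score is a published formula, it can be reproduced in a few lines. The sketch below scores one common web finding (SQL injection); the metric weights are taken from the v3.1 specification as I understand it, so verify against the official calculator before relying on them.

```python
import math

# Metric weights per the CVSS v3.1 specification (double-check against
# the official calculator at https://www.first.org/cvss/). This scores
# the vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H.
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C = I = A = 0.56                          # High C, I and A impact
SCOPE_CHANGED = False                     # S:U

iss = 1 - (1 - C) * (1 - I) * (1 - A)
if SCOPE_CHANGED:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
else:
    impact = 6.42 * iss
exploitability = 8.22 * AV * AC * PR * UI

if impact <= 0:
    base = 0.0
else:
    raw = impact + exploitability
    if SCOPE_CHANGED:
        raw *= 1.08
    base = math.ceil(min(raw, 10) * 10) / 10  # CVSS "round up" to 1 decimal

print(base)  # -> 9.8 (Critical)
```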

one
1

In my experience working on configuration management systems, this is the approach enterprises generally use apart from the ones that you mentioned.

You can do a threat assessment in the following way:

  1. Prepare a list of vulnerabilities that you want to test against.
  2. Assign a risk factor to each vulnerability.
  3. Run the configuration management tool and roll the detected findings up into your threat factor (a toy roll-up is sketched below).

Some example tools: the Nessus vulnerability scanner and the Qualys web application scanner.
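As a toy illustration of steps 1 through 3, the sketch below assigns risk factors to a few findings and rolls them up into a single threat factor. The finding names, weights, and simple summation are all invented for illustration; they are not any tool's real scheme.

```python
# Toy illustration of steps 1-3: the finding names, weights, and the
# simple summation are all hypothetical, not any tool's real scheme.
RISK_FACTORS = {
    "sql_injection": 9.8,
    "reflected_xss": 6.1,
    "missing_security_headers": 3.1,
}

def threat_factor(findings):
    """Roll the risk factors of the detected findings up into one number."""
    return sum(RISK_FACTORS[f] for f in findings)

# Suppose the scanner flagged two of the checks on our list:
print(f"{threat_factor(['sql_injection', 'missing_security_headers']):.1f}")  # -> 12.9
```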

Limit
1

Have a look at the following risk rating models:

  1. SANS
  2. Comodo
  3. WASC
  4. OSSTMM
  5. CVSS
  6. PCI
Ash Roy
1

There is also CWSS. The documentation also describes the differences from CVSS:

  • CVSS assumes that a vulnerability has already been discovered and verified; CWSS can be applied earlier in the process, before any vulnerabilities have been proven.
  • CVSS scoring does not account for incomplete information, but CWSS scoring has built-in support for incomplete information.
  • CVSSv2 scoring has a large bias towards the impact on the physical system; CWSS has a small bias in favor of the application containing the weakness.

I also heard that CWSS leaves less room for interpretation than CVSS, resulting in more consistent scoring.
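For a sense of the arithmetic, the CWSS 1.0.1 score is documented as the product of three subscores: a Base Finding subscore (0 to 100), an Attack Surface subscore (0 to 1), and an Environmental subscore (0 to 1). The values in this sketch are invented; in the real specification each subscore is itself computed from several weighted factors.

```python
# CWSS 1.0.1 combines three subscores multiplicatively; the values here
# are invented, and in the full specification each subscore is itself
# computed from several weighted factors.
base_finding = 70.0    # 0..100: how severe the weakness would be
attack_surface = 0.85  # 0..1: how reachable the weakness is
environmental = 0.9    # 0..1: business and operational context

cwss = base_finding * attack_surface * environmental
print(f"CWSS score: {cwss:.1f} / 100")  # -> about 53.5
```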

Sjoerd
0

A little late to the party, but I just came across some other risk modeling frameworks today in this PDF, so I thought I'd add them here just in case. Some of them seem useful, but as always YMMV.

  • Department of Homeland Security
  • NIST
  • CMS
  • OCTAVE
HashHazard