
Is there an accurate method or formula to convert risk scores between the OWASP Risk Rating Methodology (Overall Risk Severity) and the CVSS v1, v2, and v3 base scores?

And is there a way to convert scores between the different CVSS versions, for example to convert a CVSSv1 score to a CVSSv3 score or vice versa?

Bob Ortiz

2 Answers


I am the lead architect of a very popular vulnerability database, and we face similar problems. At the moment we have nearly 90,000 vulnerability entries with a CVSSv2 base and temporal score. We are adding CVSSv3 scores and trying to convert most of the old data. I discuss this transformation only to illustrate the basic principle.

The most important aspect is: do not try to convert the scores themselves. Transform the vector into the new format and re-calculate the new score from it instead.

If you take a look at the user guide of CVSSv2 and the specification of CVSSv3 (generic links that might change in the future), you can see that some of the base and temporal metrics can be mapped:

  • AV (P is new)
    • N => N
    • A => A
    • L => (L|P) (usually L)
  • Au => PR (needs manual optimization)
    • N => N
    • S => (L|H) (usually L)
    • M => (H|L) (sometimes H)
  • CI / II / AI (see comments below)
    • C => H
    • P => L
    • N => N
  • E
    • H => H
    • F => F
    • POC => P
    • U => U
    • ND => X
  • RL
    • OF => O
    • TF => T
    • W => W
    • U => U
    • ND => X
  • RC (some deviation)
    • C => C
    • UR => R
    • UC => U
    • ND => X

But there is some additional complexity regarding other vectors:

  • AC in v2 is effectively split into AC and UI in v3

  • Even though the C, I and A impact metrics stay the same, v3 adds the new Scope (S) metric. In most cases a v2 vector with C:C/I:C/A:C suggests S:C in v3, which might be derived by default.

If you follow this definition, you may be able to convert approximately 97% of all entries without touching them manually. The deviation in accuracy during such a transformation is usually very small.
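The base-metric mapping above can be sketched in code. This is only an illustration of the principle, not an official converter: the default choices for the ambiguous cases (AV:L, Au:S, Au:M) follow the "usually" hints in the list, the function name and structure are my own, and the result would still need to be fed into a real CVSSv3 calculator to obtain a score. The temporal metrics (E, RL, RC) would map the same way.

```python
# Sketch of the v2 -> v3 base-metric transformation described above.
# Ambiguous mappings are flagged for the manual optimization the
# answer mentions. Illustrative only; not an official converter.

V2_TO_V3 = {
    "AV": {"N": "N", "A": "A", "L": "L"},  # AV:L may also become AV:P
    "Au": {"N": "N", "S": "L", "M": "H"},  # Au maps to PR in v3
    "C":  {"C": "H", "P": "L", "N": "N"},
    "I":  {"C": "H", "P": "L", "N": "N"},
    "A":  {"C": "H", "P": "L", "N": "N"},
}

# (metric, value) pairs that "usually" map one way but need review
AMBIGUOUS = {("AV", "L"), ("Au", "S"), ("Au", "M")}

def convert_v2_base(vector):
    """Map a CVSSv2 base vector string to v3-style metrics.

    Returns (metrics, needs_review): the mapped metrics and a list of
    the v2 metrics whose mapping should be optimized manually.
    """
    parts = dict(p.split(":") for p in vector.split("/"))
    out, review = {}, []
    for metric, value in parts.items():
        if metric in V2_TO_V3:
            target = "PR" if metric == "Au" else metric
            out[target] = V2_TO_V3[metric][value]
            if (metric, value) in AMBIGUOUS:
                review.append(metric)
    # AC and the new UI metric cannot be derived mechanically; default
    # S:C when all three v2 impacts were Complete, per the note above.
    if all(parts.get(m) == "C" for m in ("C", "I", "A")):
        out["S"] = "C"
    return out, review

metrics, review = convert_v2_base("AV:N/AC:L/Au:N/C:P/I:P/A:N")
print(metrics)  # {'AV': 'N', 'PR': 'N', 'C': 'L', 'I': 'L', 'A': 'N'}
print(review)   # []
```

Running the function on an ambiguous vector such as `AV:L/AC:H/Au:S/C:C/I:C/A:C` would default S:C and flag both AV and Au for manual review.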

Marc Ruef

Just use Open FAIR instead of CVSS and the OWASP Risk Rating Methodology. Forgo any old ratings you have, and definitely avoid the vendor-driven scores. There are some nice facets of the OWASP Risk Rating Methodology (a major consultancy I worked for a few years back used it to great success with our clients) as well as of CVSS (especially v3), but I think FAIR speaks to risk committees, boards of directors, other executives, auditors, regulators, and anyone else who needs to join the growing conversation around cyber risk.

FAIR, or the Factor Analysis of Information Risk, is well documented in the book Measuring and Managing Information Risk, which goes into detail about why NIST SP 800-30, FIRST CVSS, and the non-standard risk language found in all of the other frameworks aren't as sound as something like FAIR. Parts of FAIR are set in stone, while other parts, such as the vehicles for calculation (i.e., PERT and Monte Carlo VaR), are interchangeable (e.g., PERT with P-Box, MC VaR with Bayesian networks).
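As a rough illustration of the PERT/Monte-Carlo-VaR style of calculation mentioned above: FAIR derives an annual loss distribution from Loss Event Frequency and Loss Magnitude estimates. In this sketch, `random.triangular` stands in for a true beta-PERT distribution, and every input number is made up; the function name and structure are mine, not part of the Open FAIR standard.

```python
# Minimal Monte Carlo sketch of a FAIR-style annual loss simulation:
# sample Loss Event Frequency (events/year) and Loss Magnitude ($/event)
# from three-point estimates, then read a VaR percentile off the sorted
# results. Triangular is a stand-in for beta-PERT; inputs are invented.
import random

def simulate_annual_loss(lef_min, lef_mode, lef_max,
                         lm_min, lm_mode, lm_max, runs=10_000):
    losses = []
    for _ in range(runs):
        events = random.triangular(lef_min, lef_max, lef_mode)
        magnitude = random.triangular(lm_min, lm_max, lm_mode)
        losses.append(events * magnitude)
    losses.sort()
    return losses

losses = simulate_annual_loss(0.1, 0.5, 2.0, 10_000, 50_000, 250_000)
var_95 = losses[int(0.95 * len(losses))]  # 95th-percentile annual loss
print(f"95% VaR: {var_95:,.0f}")
```

A percentile like this, expressed in currency rather than an abstract 0-10 score, is the kind of output that boards and regulators can act on.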

atdre
    The OP was asking about converting between other ratings though. Introducing a 3rd system does not address the question IMO. – Mike Goodwin Jun 17 '16 at 17:55
  • @Mike: That's true, I did not address the question directly; however, my answer offers an alternative that is a complete solution to the questioner's problem set – atdre Jun 17 '16 at 18:20
  • There are many reasons why CVSS and OWASP Risk Ranking are not compatible with FAIR (or each other) and I think the resources I provided cover that specific topic in quite accurate detail, such as the fact that CVSS and the OWASP Risk Rating Methodology (like NIST SP 800-30 and others before them) utilize non-standard risk language and invalid (e.g., unvetted) scoring methods – atdre Jun 17 '16 at 18:22