4

How can I calculate a single loss expectancy without a given Exposure Factor?

Can someone please explain this to me?

schroeder
Diogo
  • Reading between the lines of the SLE definition, I believe that the exposure factor must be a somewhat subjective measure that you have to estimate yourself. – Arran Schlosberg Apr 19 '15 at 03:49

1 Answer

5

You cannot calculate a Single Loss Expectancy (SLE) without an actual, historical, estimated, or guesstimated Exposure Factor (EF). I think what is lacking in most INFOSEC risk-management training materials that cover quantitative analysis is guidance on how to translate the generic risk definition [risk = f(asset, threat, vulnerability)] into an EF and then into the SLE and ALE formulas. I looked online just now, and I didn't find anyone who covers it well.
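
For reference, the standard quantitative formulas those terms plug into are:

    SLE = Asset Value (AV) × Exposure Factor (EF)
    ALE = SLE × Annualized Rate of Occurrence (ARO)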

For a risk to exist there must be a vulnerability to exploit and threats against that vulnerability. Those threats also have a probability of occurrence (which may be based upon observed attacks). That threat probability translates into the Annualized Rate of Occurrence (ARO) in the quantitative analysis. Your EF, then, is based mostly upon the vulnerability and its consequences to the asset when the threat occurs.
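
As a minimal sketch of how those pieces fit together (every number below is hypothetical, chosen only to show the arithmetic):

    # Hypothetical inputs for one threat/vulnerability pair
    asset_value = 250_000      # value of the asset at risk, in dollars
    exposure_factor = 0.4      # estimated fraction of the asset lost per incident
    annualized_rate = 1.5      # ARO: expected incidents per year (e.g. 3 observed over 2 years)

    sle = asset_value * exposure_factor   # Single Loss Expectancy per incident
    ale = sle * annualized_rate           # Annualized Loss Expectancy

    print(f"SLE = ${sle:,.2f}")   # SLE = $100,000.00
    print(f"ALE = ${ale:,.2f}")   # ALE = $150,000.00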

Many per-risk (meaning per threat/vulnerability pair) EFs come out to either 0 or 1, which reduces some of the risk-analysis workload. When estimating an EF, it also helps to consider any mitigators that are put in place to reduce or eliminate the vulnerability.

Some simplistic examples of trivial 0 and 1 EFs (worked SLE numbers follow the list):

  • Asset: an online-accessible bank account's balance

    • Threat: Hacker employs phishing emails to get bank account logins to drain accounts


      • Vulnerabilities: HUMINT: account holder is tricked into revealing their userid & password
      • Mitigators: none
      • Resultant EF to bank account balance: 1.0
    • Threat: Hacker employs phishing emails to get bank account logins to drain accounts


      • Vulnerabilities: HUMINT: account holder is tricked into revealing their userid & password
      • Mitigators: bank does not allow external balance transfers to be initiated online; bank does not show account numbers or routing numbers online
      • Resultant EF to bank account balance: 0.0
    • Threat: Hacker uses recent lists of userid/password pairs stolen from a social media site


      • Vulnerabilities: HUMINT: many account holders use the same password on every site, and AUTHEN: many sites (including this bank) use one's email address as a userid
      • Mitigators: bank has two-factor authentication in place
      • Resultant EF to bank account balance: 0.0
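
Plugging those trivial EFs into the SLE formula with a purely hypothetical $10,000 account balance: with an EF of 1.0 the SLE is the full $10,000 (10,000 × 1.0), while with an EF of 0.0 the SLE is $0, so such risks either dominate the roll-up or drop out of it without any further estimation.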

For most other risks, one has to assess the vulnerability, the threat, and any vulnerability mitigators to decide upon an estimated EF. If one does not have much real observed data on which to base the EF, then, depending on the risk, these individual SLEs can be wildly out of line. When rolled up into aggregate Annualized Loss Expectancies, the result can carry a very large margin of error because of all the poorly estimated individual EFs.

However, using the banking industry as an example: a bank that has been in operation for many years has detailed historical loss data (including cyber-related losses). Such a bank can actually calculate these values (EF, SLE, ARO, ALE) quite accurately for its history to date, and then use them to predict future losses.
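
To illustrate what that calculation might look like (the loss records, years, and asset value below are entirely hypothetical, and a real bank's data would be far richer):

    # Hypothetical historical loss records for one risk (threat/vulnerability pair)
    years_of_history = 10
    asset_value = 50_000                                 # balance at risk per incident, hypothetical
    observed_losses = [12_000, 8_500, 20_000, 15_500]    # dollar loss per historical incident

    aro = len(observed_losses) / years_of_history        # incidents per year
    sle = sum(observed_losses) / len(observed_losses)    # average loss per incident
    ef = sle / asset_value                               # implied Exposure Factor
    ale = sle * aro                                      # expected loss per year

    print(f"ARO = {aro:.2f} incidents/year")             # ARO = 0.40 incidents/year
    print(f"SLE = ${sle:,.2f}")                          # SLE = $14,000.00
    print(f"EF  = {ef:.2f}")                             # EF  = 0.28
    print(f"ALE = ${ale:,.2f}")                          # ALE = $5,600.00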

Also, given that detailed loss history, banks can do a relatively accurate what-if cost-vs-benefit analysis of implementing a new mitigator (such as two-factor authentication); a code sketch of these steps follows the list:

  1. Determine the total-cost estimate to implement and deploy that mitigator.
  2. Calculate the aggregate ALE given current EFs over a time period (say 10 years).
  3. Tweak any EFs that the mitigator affects.
  4. Calculate the new aggregate ALE over that same time period.
  5. Calculate the difference between the new aggregate ALE and the current aggregate ALE; this difference is the hoped-for benefit, since the new ALE should ideally be smaller than the current ALE.
  6. If the benefit (loss reduction) is greater than the total cost to implement, then implement the mitigator; if the benefit is significantly less than the total cost, the cost-vs-benefit analysis would recommend not implementing it.
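
Here is a rough sketch of those six steps in code; the risk names, EF values, costs, and time period are all hypothetical placeholders, but the structure of the comparison is the point:

    # Hypothetical risk register: asset value, current EF, and ARO per risk
    risks = {
        "phishing":            {"av": 500_000, "ef": 0.30, "aro": 2.0},
        "credential_stuffing": {"av": 500_000, "ef": 0.10, "aro": 4.0},
    }
    horizon_years = 10
    mitigator_cost = 250_000                                  # step 1: total cost to implement (e.g. 2FA)

    def aggregate_ale(register):
        return sum(r["av"] * r["ef"] * r["aro"] for r in register.values())

    current_loss = aggregate_ale(risks) * horizon_years       # step 2: aggregate ALE over the period

    mitigated = {name: dict(r) for name, r in risks.items()}  # step 3: tweak the EFs the mitigator affects
    mitigated["phishing"]["ef"] = 0.05
    mitigated["credential_stuffing"]["ef"] = 0.01

    new_loss = aggregate_ale(mitigated) * horizon_years       # step 4: new aggregate ALE, same period
    benefit = current_loss - new_loss                         # step 5: hoped-for loss reduction

    if benefit > mitigator_cost:                              # step 6: compare benefit to cost
        print(f"Implement: benefit ${benefit:,.0f} exceeds cost ${mitigator_cost:,.0f}")
    else:
        print(f"Skip: benefit ${benefit:,.0f} does not justify cost ${mitigator_cost:,.0f}")
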
cybermike
  • What "academic" area deals with these estimations? It seems quite actuarial in nature. – Arran Schlosberg Apr 19 '15 at 11:47
  • Many academic areas utilize and do research on risk analysis. Financial/Insurance risk is a prime example, and having earned my MBA I know risk analysis is part of that curriculum. Risk management is the actual basis of all cyber-security from its beginnings, and as a certified INFOSEC Risk Assessment Professional I know it is taught in the Computer Science curriculum. Risk analysis is also likely part of Human Behavioral Science, Disease Management Science, and many others. – cybermike Apr 19 '15 at 12:04
  • Thanks. It struck me as being a rudimentary equivalent of premium pricing in general insurance (cost of a claim x probability of said claim). – Arran Schlosberg Apr 19 '15 at 22:58