91

There is a strategic question that we are banging our heads against in my IT department, which essentially boils down to this:

  • There is a type of attack against our systems that can cause a lot of damage if missed or not addressed properly. More precisely, this could cause a major blow to the company's operations and potentially ruin the entire business.

  • The probability of such an attack is very low. Nonetheless it does happen to other companies in the field regularly (however rarely). It has not happened to our systems yet.

  • In order to be able to mitigate the attack, we must hire another employee and spend an additional 8% (at least) of our budget every year, both of which are significant investments.

  • Usually we gauge such problems by multiplying the probability of occurrence by the expected damage, but in this case we are lost trying to multiply a number tending to zero by a number tending to infinity to come up with a cohesive answer.

  • Along the same lines, our team is divided into two camps: one thinks that the attack will never happen and the investment of time and money will be wasted; the other camp thinks that the attack will come tomorrow. Everybody agrees, though, that half-assed measures will both waste resources and fail to protect against the attack – we either go all-in or don't bother at all.

As the team leader, I see merit in both opinions – we may operate for the next 20 years without encountering such an attack, and we might have it today (out of the blue, as it usually happens). But I still have to decide which way to proceed.

In that regard, I would like to ask whether you have encountered such puzzles and what the industry's approach to dealing with them is.

Jarvis
  • 3
  • 3
David Bryant
  • 1,139
  • 2
  • 8
  • 10
  • 1
    Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/80706/discussion-on-question-by-david-bryant-how-to-deal-with-low-probability-high-imp). – Rory Alsop Jul 26 '18 at 12:50

15 Answers

75

Now usually we gauge such problems by multiplying the probability of occurrence by the expected damage, but in this case we are lost trying to multiply a number tending to zero by a number tending to infinity and come up with a cohesive answer.

Unfortunately this is what you need to do in this case. But I don't believe that this calculation is really as difficult as you make it out to be.

The risk can be estimated by first estimating how many companies face the same security threat and don't take the necessary precautions. Then you skim the news reports to check how many companies are affected by it each year (plus an educated guess at how many of them managed to prevent the mess from becoming public). Divide the second number by the first, and you have a yearly risk percentage.

Your damage does not tend towards infinity. The event which would come closest to "infinite damage" would be a collapse of the entire space-time continuum of the universe (and even that is an event whose damage you could quantify in dollars with a Fermi estimate, if you are bored and interested in astrophysics). The highest damage you can realistically cause is bankrupting your company. Maybe you could cause even more damage if you also account for damages caused to other people, but when your company is bankrupt, it can't pay those liabilities. So you can use the net worth of the company as the upper limit of your expected damage.
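A minimal sketch of that calculation, with purely illustrative placeholder numbers (none of them come from the question):

```python
# Rough annualized-risk estimate following the approach above.
# Every figure here is an invented placeholder.

exposed_companies = 500        # estimated peers facing the same threat without precautions
incidents_per_year = 2         # reported incidents plus an educated guess at unreported ones
annual_probability = incidents_per_year / exposed_companies    # ~0.4% per year

company_net_worth = 10_000_000                            # you cannot lose more than the company is worth
estimated_damage = min(50_000_000, company_net_worth)     # so "ruinous" damage is capped at net worth

expected_annual_loss = annual_probability * estimated_damage   # probability x damage
annual_mitigation_cost = 0.08 * 2_000_000 + 80_000             # 8% of a hypothetical budget + one extra hire

print(f"Expected annual loss:   ${expected_annual_loss:,.0f}")
print(f"Annual mitigation cost: ${annual_mitigation_cost:,.0f}")
print("Mitigation is justified on expectation" if annual_mitigation_cost < expected_annual_loss
      else "Mitigation costs more than the expected loss")
```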

Philipp
  • 48,867
  • 8
  • 127
  • 157
  • 81
    ....and because the formula is commutative, and you know the cost, you might instead start by calculating the threshold level of risk at which mitigation is cost effective - then there is an early get out to researching an estimate on risk. BTW I do hope my local nuclear power station don't cap their damage estimates on the value of their own company. – symcbean Jul 24 '18 at 14:58
  • 4
    @symcbean: Rest assured that the power plant has great insurance, if not with an actual insurance company then through the collective wealth of all tax payers plus the economy that backs the value of whatever fiat currency the government issues. ;-] – David Foerster Jul 24 '18 at 19:14
  • @DavidFoerster I might be wrong, but from what I remember, the plant carries "normal" insurance for damage it might cause within a country, and the government provides the insurance for what spills over the borders. – mbrig Jul 24 '18 at 21:41
  • 2
    @mbrig: At least where I live the insurances of nuclear power plants have an absolute cap depending on size/output (I think) and it isn't even *that* high. The plants are usually owned by subsidiaries of the big electricity companies to limit "upstream" liability. All damages above that are either covered by the tax payer or whoever suffers them. – David Foerster Jul 24 '18 at 22:11
  • 11
    True, the expected-value formula is mathematically correct in this situation. However, it's not really useful for dealing with high-cost, low-probability events. If you take the very low odds of an extinction-level asteroid impact, multiply it by the population of the Earth, and factor in a person's life expectancy at birth, you get the mathematically-correct prediction that you've got around a 3% chance of being killed by an asteroid. How much do you spend on asteroid defenses? – Mark Jul 24 '18 at 22:40
  • @symcbean I don't know if it's still the case, but oil companies used to arrange it so that tankers were owned by separate companies, that had exactly one asset - the tanker. This meant that in the event of an oil tanker disaster, the financial damage was limited to that company, which now had no assets. – James_pic Jul 25 '18 at 11:19
  • *Your damage does not tend towards infinity.* - For this case any number which is beyond the ability of the company to recover from is effectively infinity for this company. – Chad Jul 25 '18 at 16:10
  • 3
    @Chad No, it is not. A security measure which costs $100,000 per month might be ridiculous to protect a little mom&pop store from bankruptcy, but justified for protecting a global corporation from the same risk. But if you assume `bankrupt == $∞`, you wouldn't come to that conclusion. If you insert ∞ into the standard risk assessment formulas, then the math says that any measure which protects from bankruptcy is justified, no matter how small the risk. – Philipp Jul 25 '18 at 16:15
  • A solution that costs 10k a month might as well be $∞ a month... that is the point I was making. There is a limit for x in any business where f(x) = f(∞) where x < ∞ – Chad Jul 25 '18 at 16:23
  • 6
    @Chad another way to look at this: The risk for "the company" isn't actually what we care about. "The company" is not a sentient being. "The company" doesn't feel anything when it gets liquidated. What actually matters is the damage to the human beings who have stakes in the company. And when it is a limited company, then the damage is limited to what the company is worth. When it is an unlimited company, the damage is limited to what the company is worth + the personal wealth of the owners. – Philipp Jul 25 '18 at 16:25
  • @Philipp - I get what you are saying, I just disagree that your perspective is any more valid than the OP's perspective. Nothing in your answer invalidates the OP's perspective, it just appears to disregard that perspective as irrelevant. I think that detracts from your otherwise great answer. – Chad Jul 25 '18 at 16:28
56

A warning that this response is coming from theory, not experience.

  1. Are there ethical consequences regarding the impact of the bad event on others? Will it hurt your users? Also consider the employees of the company who may be hurt if it is ruined by an attack. If so, you may feel you have an ethical obligation to mitigate.

  2. If you do mitigate the attack, can your company use this as a selling point or competitive advantage?

  3. Insurance may not work if the damage is to your company's reputation; insurance cannot really help you recover from that. Depending on what form the damage takes, though, insurance may be a good option.

  4. Maximizing expected monetary outcomes is not necessarily a good way to make decisions except in low-stakes problems. Suppose you could take a risk with a 99% chance of bankrupting the company, but a 1% chance of multiplying its net worth by 200. "In expectation" you double the company's value, but you will almost certainly be out of a job (a minimal simulation of this gamble follows the list).

  5. Instead of expected value, maybe consider such objectives as long-term survival of the company. For instance, under the two options (mitigate or not), how long do you expect the company to be around? (a) If the company is struggling and an 8% budget increase is likely to bankrupt it, then you have no choice but to ignore the problem and hope you get lucky. If it survives this period and flourishes, you can invest at that point. (b) If the company is doing well, then it seems like it can afford to play things safe. (c) If somewhere in the middle, the decision from this perspective becomes difficult.

  6. It seems unlikely that the higher-ups would want such a decision made without their input or control...
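Here is a minimal simulation of the gamble from point 4, with invented numbers, showing how a positive expected value can coexist with near-certain ruin:

```python
import random

# Point 4's gamble: 99% chance of bankruptcy, 1% chance of multiplying net worth by 200.
net_worth = 1_000_000          # illustrative current company value
p_win, multiplier = 0.01, 200

expected_value = p_win * multiplier * net_worth    # $2,000,000: double the net worth "in expectation"

# Simulate many companies each taking the gamble once.
trials = 100_000
survivors = sum(1 for _ in range(trials) if random.random() < p_win)

print(f"Expected value of the gamble: ${expected_value:,.0f}")
print(f"Companies still in business:  {survivors / trials:.1%}")   # ~1%
```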

usul
  • 657
  • 4
  • 6
  • 17
    Point 6. is the answer to this question. – Tom K. Jul 25 '18 at 07:32
  • 16
    The [Kelly criterion](https://en.wikipedia.org/wiki/Kelly_criterion) explains point 4. Essentially, you can maximize ev when the stakes are a very small share of your total bankroll. For higher stakes, a loss would affect your ability to earn in the future, so picking a lower ev option may be the optimal choice – JollyJoker Jul 25 '18 at 08:07
  • 6
    @TomK. Correct, but the OP needs to communicate the scenario (primarily point 5) to those higher-ups (in a form they can understand). – TripeHound Jul 25 '18 at 11:57
25

You should be taking into account the fact that your team knows about this attack vector.

If simply knowing about the vulnerability makes it easier to execute the attack, you may have a bigger problem than you thought. (For example, a hard-to-find backdoor known to your team.)

If that's the case, your own team members are high on the list of opponents to worry about, and the probability of being attacked may be much higher than it would be if your team didn't know about it.

Employees can and often do become disgruntled. Take that into consideration.

nerdfever.com
  • 351
  • 2
  • 3
16

You get insurance that would cover that risk. As it is a very low-probability risk, it is hard for you to assess, so instead of creating individual insurance for it, you try to have it rolled into insurance that covers more mundane risks. Basically, you check which insurance you already have (or probably need anyway) would be most likely to cover it, and if it doesn't, you try to negotiate an additional deal that brings it under coverage.

Insurance is about converting a lower summary risk (the probability of the event times the monetary cost of handling it) into a higher one (a probability of 1 times the cost of the premium), while at the same time converting a higher operational risk (the probability of the event times the ultimate damage of all its consequences for the business) into a lower one (again, just the cost of the premium).

This makes sense for both the insurance seller and the buyer only when the damage without mitigation is far higher than the cost of mitigation, namely when the unmitigated damage would critically endanger the continuation of the business.
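A small numeric illustration of that trade, with placeholder figures:

```python
# Self-insuring vs. buying cover, with invented numbers.
p_event = 0.005                 # assumed annual probability of the catastrophic event
uninsured_loss = 10_000_000     # assumed damage if it happens and you carry the risk yourself
annual_premium = 80_000         # assumed premium the insurer charges (paid with probability 1)

expected_uninsured_loss = p_event * uninsured_loss    # $50,000 per year "on average"

# The premium exceeds the expected loss (that margin is the insurer's business model),
# but it replaces a small chance of a business-ending loss with a known, survivable cost.
print(f"Expected annual loss if uninsured: ${expected_uninsured_loss:,.0f}")
print(f"Guaranteed annual premium:         ${annual_premium:,.0f}")
print(f"Worst case uninsured:              ${uninsured_loss:,.0f}")
print(f"Worst case insured:                ${annual_premium:,.0f} plus any deductible")
```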

  • This has the additional advantage of pushing a lot of the responsibility for estimating the risk to the insurance company. They can say we can insure you but it will cost you. Or they can say you need these mitigations in place before we will insure you. Or they can differentiate the price depending on what mitigations you have in place. – kasperd Jul 29 '18 at 18:00
13

You have chosen the correct approach:

Now usually we gauge such problems by multiplying the probability of occurrence by the expected damage...

and just faced its limits:

we are lost trying to multiply a number tending to zero by a number tending to infinity

I would say that you are facing an unacceptable risk (one that could cause a major blow to the company's operations and potentially ruin the entire business) with a very low rate of occurrence.

My opinion is that you are facing a strategic decision. As the team leader, your role is to pass the problem to your boss, along with its elements: what will happen if your organization suffers that attack, how many such attacks have been seen in recent years, what the possible mitigations are, and what they cost. When both the risk and the cost are this important, the decision normally belongs to the top boss.

Serge Ballesta
  • 25,636
  • 4
  • 42
  • 84
11

Your first step would be to make a proper, quantitative risk analysis. Other answers have already provided pointers here; I especially want to second the mention of "How to Measure Anything in Cybersecurity Risk", a brilliant book. You can also look towards FAIR as a quantitative method.

However, risk management neither starts nor ends with risk analysis. Especially for "black swan" risks, other factors come into play. You cover these with a defined risk appetite and risk criteria.

Your company needs to define its risk appetite, which states how it relates to risk in general (prefers to be conservative and avoid risks, prefers to be more aggressive and accept risks, etc.). This could define that risks beyond a certain impact are unacceptable even if their likelihood or frequency is low. Typically, risks that would certainly destroy the company fall into this category.

These are good candidates to mitigate via insurance, if possible. "Rare occurrence but catastrophic impact" is the home territory of insurers.

The second definition you need is your risk criteria, which define which kinds of risks you are unwilling to accept for ethical reasons, reasons of legal liability, or otherwise. For example, you could define that risk to human life is unacceptable even if the quantification falls into the acceptable range, or that a potential prison sentence for C-level executives is unacceptable no matter the quantified risk value.

With these three things done - risk analysis, risk appetite and risk criteria - you should have an actionable result. Your black swan will either be unacceptable by the appetite or criteria definitions, or, if not, it is simply another risk that is to be treated like any other. The analysis may be especially volatile due to the low frequency and the uncertainty in properly estimating it, which is where a proper quantitative method that can take into account a) a range of values and b) the confidence of estimates comes into play and will help you out. (* see below)

For example, in the risk analysis I usually do, I employ a Monte Carlo method and one of my outputs is a scatterplot of all the scenario results. Any black swans will show up as individual (and rare) outliers, and I can identify them and see them in context.

In the end, especially for high-impact risks, you prepare the decision. Someone with the proper authority makes the decision based on your information, so your duty is to give them the whole picture without overloading them with information not vital to decision making.


One more remark about risk treatment: you may look beyond the analysis and check possible treatment options. If you can identify a treatment that dramatically changes one of your input values with a small effort, it might be well worth doing for the purpose of bringing your camps closer together. For example, a measure that would reduce the impact from catastrophic-and-company-destroying to serious-but-survivable turns your black swan into a regular high-impact risk.


(*)

Since many people have never seen a proper quantitative risk analysis: what you are looking for is something that takes not one value as an input, but a range and a shape parameter. PERT is one method - you identify the lowest reasonably likely value, the highest reasonably likely value and the most likely value, also known as optimistic-realistic-pessimistic. Another approach is to explicitly specify a confidence value that shapes your curve; in a beta distribution this would be the lambda value.

Look at Fair-U for an example. I quite like the FAIR method, but there are others. Just don't accept less than a proper quantitative (sometimes called statistical) approach to solve these more tricky risk scenarios.
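As a concrete illustration of the three-point (PERT) estimate described above, here is a minimal sampling sketch; the loss range and the lambda value are invented for the example:

```python
import numpy as np

def pert_samples(low, mode, high, lam=4.0, n=100_000, seed=0):
    """Draw samples from a (modified) PERT distribution built from a three-point estimate.
    low/mode/high are the optimistic / most likely / pessimistic values;
    lam is the shape (confidence) parameter, with 4 being the classic PERT default."""
    rng = np.random.default_rng(seed)
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, n)

# Illustrative loss-magnitude estimate: at least $100k, at most $5M, most likely $600k.
losses = pert_samples(100_000, 600_000, 5_000_000)
print(f"mean ${losses.mean():,.0f}, median ${np.median(losses):,.0f}, "
      f"95th percentile ${np.percentile(losses, 95):,.0f}")
```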

Tom
  • 10,124
  • 18
  • 51
7

tl;dr: When it comes to a security vuln, a high enough impact means it should be mitigated, no matter how low the probability. On top of that, the probability is not static; it is constantly, perhaps radically, increasing.


There is a book, The Black Swan: The Impact of the Highly Improbable by Nassim Taleb, which pulls most of its examples from finance but is about exactly the kind of problem you face, where:

  • The event is rare, potentially verging on never happening
  • If the event occurred the impact would be severe

The book presents an extensive philosophical argument about why simple/common risk assessment tools such as your "Multiply probability by impact" are ill suited to assessing the risks posed by such extreme cases and lead to underassessment of risk.

Taleb encourages minimizing exposure to high-negative-impact events: even if the probability of an event is vanishingly small, the right move is to mitigate a potential catastrophe.


I like Taleb's ideas, and I believe he also mentioned cyber security and how companies were underestimating cyber risk in his addendum to this book, but here's my take as somebody with at least a passing interest in security.

If your company is vulnerable to a threat that is being actively exploited in the wild, the probability of that attack being turned on your organization grows geometrically with time. Attacks don't go away; they just get more sophisticated and easier to detect and automate. What requires some hands-on thinking and manipulation by a smart cracker today is only a few years away from being automatically detected and exploited by a bot. That technological jump could happen overnight, and you're pwned the next day.

All this is to say that, unlike some unique 0-day in your application, the probability of attack in your situation is not a fixed number; it's growing. Moreover, that probability is not growing at a steady rate; it can and likely will make rapid leaps and bounds.
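To make the compounding effect concrete, here is a tiny sketch of cumulative attack probability over time; both the starting probability and the growth factor are invented assumptions:

```python
# Even a small per-year probability accumulates, and faster if the attack gets
# commoditized over time. Both numbers below are placeholders.
p0 = 0.01       # assumed probability of being hit in year 1
growth = 1.5    # assumed yearly growth factor as the attack becomes easier to automate

p_never_hit = 1.0
for year in range(1, 11):
    p_this_year = min(p0 * growth ** (year - 1), 1.0)
    p_never_hit *= (1 - p_this_year)
    print(f"year {year:2d}: P(hit this year) = {p_this_year:5.1%}, "
          f"P(hit at least once by now) = {1 - p_never_hit:5.1%}")
```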

  • 5
    I disagree on the tl;dr part. If the probability is ridiculously low, you may forego mitigation. For example, most sites do not take steps to mitigate against a meteorite impact. In addition, there are **context** parameters that come into play. For example, most companies have no mitigation against a thermonuclear war, not only because (thankfully) the probability is low, but also because if that happens, it really doesn't matter much if the company is affected. – Tom Jul 26 '18 at 13:48
  • 4
    But the likely-hood of thermonuclear war and meteorite impact don't grow in the same consistent way that a risk of a known hack does. Additionally, being hacked is one of those catastrophes that you have to care about the aftermath, because you're still alive. – Will Barnwell Jul 26 '18 at 18:15
  • Agreed. That is why I said "I disagree **with the dl;dr part**". The blanket statement as posted isn't true. The elaboration below is very good. – Tom Jul 27 '18 at 08:26
  • I'll edit it, because you're right and I don't like blanket statements either – Will Barnwell Jul 27 '18 at 17:11
  • 1
    I really think this is the right answer. The other answers talk about estimating probability of occurrence but there's no *effective* way to do so. – President James K. Polk Jul 28 '18 at 16:14
3
  • Regarding the possible damages, talk with some qualified lawyers about who could be held accountable and how. The risk to a company as a commercial entity may be capped at the net worth of the company, but there are some scenarios where CEOs or even employees could be personally liable in civil or criminal law if they should have known about the problem. (Depending on your jurisdiction, of course. I'm thinking about violations of privacy rules to increase profits, for instance.)
  • Depending on how many IT staff you have and what they do, hiring another employee could vastly increase the bus factor of your department. You wisely did not explain what you're doing in a public forum, but having another admin might allow you to introduce a two-man rule on more of your procedures, or to deal with planned or unplanned absences, and so on. So it might be incorrect to say that dealing with this risk costs 8% plus one employee. It would be more like "for 8% plus one employee, we can safeguard against this and many other risks."
o.m.
  • 171
  • 4
3

Here's the deal:

If you only have your own data to protect, do whatever you want, but if you have someone else's data in your system that might be compromised, you will do one of the following:

  1. Protect the data with whatever is industry standard or recommendation.
  2. Inform your clients/customers/products that you are protecting the data they entrust you with by having faith that attackers will not attack your systems.

With this, you arrive at a simple conclusion: Protect your systems in such a way that you can tell your clients/customers/products the way you are protecting their data without having them stop being your clients/customers/products.

Anything less than that and you're being deceitful towards your stakeholders. (This may come as a surprise, but yes, the clients/customers/products whose data you are storing are stakeholders in your business - even regular people - especially when it comes to that data becoming available to outsiders in ways those stakeholders might not want. You are responsible for making sure that data is adequately protected, unless you tell them up front that you are only using half measures to protect the data they submit.)

In addition to everything else: do not be blinded by simple formulas that calculate your chances of being the target of an attack. The target of an attack is not chosen at random from all the companies an attacker could conceivably go after (naive statistical analysis would use that full list to compute the chance of being attacked); it ends up being one (or possibly all) of the companies that are actually vulnerable to the attack.

This can create the illusion that you don't have to bother protecting yourselves because the chance of being the target is so low, but in reality you're just putting yourselves in the risk group of vulnerable targets, and you cannot possibly know your actual chances of being a target because companies do not exactly advertise their level of defenses (for obvious reasons). If this concept is hard to grasp, consider the following scenario: out of 100 companies, 1 company gets successfully attacked every year. Unbeknownst to you, 90 of these companies are using strong protections against attacks, and attackers simply leave them alone after probing them. Almost all of the successful attacks are against the remaining 10 companies who are using half measures. As a newcomer to this scene, you will look at the statistics and determine that you don't need to protect your systems because there's only a 1/100 chance of your company being the target of an attack, when in reality your chances are 0 if you use strong protections and 1 in 11 if you use half measures. Your internal calculation of the risk would be complete rubbish.
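For concreteness, here is the arithmetic of that hypothetical scenario in a few lines (the numbers are the ones from the example above, not real statistics):

```python
# 100 companies, 1 successful attack per year, 90 strongly protected (never breached
# in this stylized scenario), 10 using half measures. You are about to join the market.
companies = 100
breaches_per_year = 1
strong, weak = 90, 10

naive_rate = breaches_per_year / companies        # what the raw statistics suggest
rate_if_strong = 0 / strong                       # breaches land entirely on the weak group here
rate_if_weak = breaches_per_year / (weak + 1)     # you would join the weak group as its 11th member

print(f"Naive estimate:           {naive_rate:.1%}")      # 1.0%
print(f"If you protect strongly:  {rate_if_strong:.1%}")  # 0.0%
print(f"If you use half measures: {rate_if_weak:.1%}")    # ~9.1%, i.e. 1 in 11
```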

TL;DR: You cannot apply statistical analysis to the question of whether you should protect your systems or not, because you do not have enough information to do so. Instead, protect them in such a way that you can tell your stakeholders (that means making sure they understand it, not hiding the information from them) what you are doing to protect their data and they won't switch to another service provider.

pie
  • 31
  • 3
2

As an engineer who's spent significant time on safety-related work, I'd suggest you look into FMEA (Failure Mode and Effects Analysis). FMEA applies the "multiplication" method you describe, but it groups the effect/probability/detection rates into well-defined rating ranges before doing the multiplication. Because these ranges are well defined, and assuming you have some idea of the underlying probabilities, you can get a reliable RPN (Risk Priority Number) score out of the system.

Of course, you still need to justify spending the money! But at least you've formally identified your risks.
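A minimal FMEA-style scoring sketch follows; the failure modes and their 1-10 ratings are invented for illustration, and a real FMEA would rate each dimension against agreed scoring tables:

```python
# Risk Priority Number (RPN) = severity x occurrence x detection, each rated 1-10.
failure_modes = [
    # (name,                          severity, occurrence, detection)
    ("ransomware via phishing",              9,          4,         6),
    ("catastrophic targeted attack",        10,          2,         8),
    ("accidental data deletion",             6,          5,         3),
]

for name, severity, occurrence, detection in failure_modes:
    rpn = severity * occurrence * detection    # higher RPN = address sooner
    print(f"{name:32s} RPN = {rpn}")
```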

Graham
  • 581
  • 3
  • 7
1

I will give you a completely different answer.

I think you're assessing this completely wrong. I don't know how you came up with one risk/vulnerability requiring a fairly specific (and very large) monetary cost and an FTE to manage, but it sounds like you are being sold on a specific tool to 'solve' a problem.

If the 8% cost is coming from knowing the cost of a tool's licensing, or the FTE is required to manage a tool (e.g. "our EDR guy")... I mean, there's no tool out there that will resolve a risk all on its own.

If the high cost is holding you back, then rather than spending time trying to justify the risk as demanding the expenditure, why not look into the root cause and how you can address it with a smaller expenditure? EDR, DLP... there's no area of security (or vulnerability) that has one solution with one cost.

Angelo Schilling
  • 681
  • 3
  • 11
1

The industry approaches most 'risk control investment' decisions with quantitative analysis, and the following is the standard way of doing that analysis. The metrics below are the ones used during this decision-making process.

  • AV (Asset Value): the value of the asset at risk
  • EF (Exposure Factor): the fraction of the asset value lost in a single incident
  • SLE (Single Loss Expectancy) = AV * EF
  • ARO (Annualized Rate of Occurrence): how many incidents are expected per year
  • ALE (Annualized Loss Expectancy) = SLE * ARO
  • TCO (Total Cost of Ownership): the yearly cost of the mitigation
  • ROI (Return on Investment) = ALE - TCO

Following is just an example to show how to apply these formulas:

Let's assume your company sells mobile phones online and has suffered many denial-of-service (DoS) attacks. Your company makes an average of $40,000 profit per week, and a typical DoS attack lowers sales by 50%. You suffer seven DoS attacks per year on average. A DoS-mitigation service is available for a subscription fee of $15,000/month. You have tested this service and believe it will mitigate the attacks. The question is whether it is worth paying for this service to mitigate the risk, or accepting the risk and doing nothing.

Let’s do the analysis:

AV = $40,000

EF = 50%

SLE = AV * EF = $20,000

ARO = 7

ALE = SLE * ARO = $140,000

TCO (1 year) = DDoS subscription * 12 months = $180,000

ROI (1 year) = ALE - TCO = -$40,000 (negative)

Since the ROI is negative (-$40,000 per year), you can recommend, with facts to back it up, not investing in this risk mitigation (in this case, the anti-DDoS subscription) and accepting the risk instead.

Note: In a practical scenario you may need to factor in other impacts as well (for example, brand reputation damage).

Refer: https://resources.infosecinstitute.com/quantitative-risk-analysis/#gref

Hence, in your case, if you can identify the AV, EF, ARO, and so on, you will be able to support your decision (at least approximately) with facts.

Even with a very low probability (say an ARO of 0.001 as a floor), the impact could be very high ($10,000,000), and the ALE would still come out to a concrete figure ($10,000 per year). If your mitigation control costs less than that, you can justify deploying it with facts.
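The same arithmetic in a few lines of code, using the illustrative low-probability figures from the paragraph above and an invented control cost:

```python
# ALE comparison for a low-probability, high-impact risk.
worst_case_impact = 10_000_000    # SLE, assuming EF = 100% of the asset value
aro = 0.001                       # one occurrence expected every 1,000 years

ale = worst_case_impact * aro     # $10,000 per year
annual_control_cost = 8_000       # hypothetical yearly cost of the mitigation

print(f"ALE:             ${ale:,.0f} per year")
print(f"Cost of control: ${annual_control_cost:,.0f} per year")
print("Deploy the control" if annual_control_cost < ale else "Accept the risk")
```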

Sayan
  • 2,033
  • 1
  • 11
  • 21
-1

Simulation is a technique that can help build stronger intuitions about strategic questions where there is a significant amount of uncertainty.

In the context of the question, a simple way to get started is:

  1. Pick a set of distinct timeframes based on whatever the organization's decision making agility is. That is, if the org can make decisions on this issue on 3/6/12 month scopes, plan to run models for each of those timeframes.

  2. Engage with peers to collect data that let you estimate the probability of occurrence within a specific timeframe, the cost of mitigation, and the cost of impact.

    One of the other answers describes a reasonable process to do this: get a list of peers, talk to them/study news to determine if/when they saw a similar attack, and learn what they spent to mitigate/avoid, or how much they spent on response and cost of impact.

    There will be inherent problems with this data, but that's ok. Simulation helps work through those issues.

  3. For each timeframe, make a spreadsheet that has:

    • range of costs of impact, from minimum to maximum, with 90% confidence (IOW- the expert has 90% confidence that cost of impact will be between the min and max)
    • probability of occurrence of the specific incident within the timeframe

    Use the data collected from peers to produce a range of probabilities, and ranges of costs of impact, and pick a specific probability and specific range.

  4. With that minimal data entered into a spreadsheet, use random number generation to produce 2 more cells:

    • does the issue actually occur
    • if it occurs, what financial impact does it have
  5. Doing that randomization once is a trial. Rewire the spreadsheet so that you can run 10,000 or 100,000 trials, and collect and graph all of the results from those trials (a minimal code version of steps 3-5 is sketched after this list).

  6. Repeat with different probabilities and impact ranges that reflect different interpretations of the peer collected data, and repeat for different timeframes.
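A minimal code version of steps 3-5, with invented inputs standing in for the peer-collected data:

```python
import random

# Step 3 inputs (placeholders): probability of occurrence within the chosen timeframe
# and a 90%-confidence range for the cost of impact.
p_occurrence = 0.05
impact_low, impact_high = 200_000, 5_000_000

# Steps 4-5: run many trials, each deciding whether the issue occurs and, if so,
# drawing a financial impact from the range. A uniform draw is the crudest choice;
# a lognormal fitted to the 90% interval would usually be more realistic.
trials = 100_000
losses = []
for _ in range(trials):
    if random.random() < p_occurrence:
        losses.append(random.uniform(impact_low, impact_high))
    else:
        losses.append(0.0)

losses.sort()
print(f"Mean loss per timeframe: ${sum(losses) / trials:,.0f}")
print(f"95th percentile loss:    ${losses[int(0.95 * trials)]:,.0f}")
print(f"Worst simulated trial:   ${losses[-1]:,.0f}")
```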

At a certain point one just has to make a call, but deliberate simulation with visualization can illuminate assumptions and implications in ways that continually going around the room in discussion mode cannot.

These and much more sophisticated simulation techniques for risk assessment in computer security are covered in a great book:

How to Measure Anything in Cybersecurity Risk

https://www.amazon.com/How-Measure-Anything-Cybersecurity-Risk-ebook/dp/B01J4XYM16/ https://www.howtomeasureanything.com/cybersecurity/

Jonah Benton
  • 3,359
  • 12
  • 20
  • This amounts to a link-only answer: "read the book". Can you expand on *how* to build the models and perform the simulations? – schroeder Jul 25 '18 at 08:27
-1

• Do you not have the alternative of just (i.e. only) having a recovery plan and process in case the attack in question does happen — a backup, so to speak?

Having said that… from your wording it looks as though you can suffer an attack of this type without even knowing it, unless you spend money just to be able to detect it. …So maybe the foregoing is a void concept.

• More subjectively, it looks as though this has that standard (and incredibly annoying) feature of difficult decisions, being that it involves unknowns. If you knew X (in this case, that an attack would/not happen), you would certainly do Y (in this case, spend lots of money on a counter, or not). Sometimes it can be helpful to acknowledge that you really do not have the information you need to inform which way you jump; then, you toss a coin and jump… as opposed to spending huge amounts of energy wringing your hands. (In extreme situations, human beings tend to behave on the basis that the eventuality will be what they would hope for, however unlikely it might be.)

There is value, for your staff, in just having a person to take the responsibility for the dark unknown.

I would like to give a nod to Will Barnwell's answer (and the cited Nassim Taleb). Meaning absolutely no insult to "extensive philosophical arguments"… such arguments are not necessarily correct. The issue for me is that it is easy to imagine that there might be two to three of these beasties within a few years, meaning that the cost of counter-measures rises from significant to crippling. Independently, if the risk can reasonably be expected to grow, then that changes the (given) parameters of the decision.

On the one side, we have the cost of the attack. As Philipp has observed, there is a maximum set by the value of the company; it would be misguided to spend $10m reconstructing a company that is worth only $1m. Unfortunately, this ignores the combined cost for the company’s customers. Whereas it is not sensible for a $1m business to value a potential problem at more than its own value, it remains true that the significance of the issue could easily be (say) $50m.

On the other side, there is the size of the risk. Again, an accountant (so to speak) might simply find out how many companies there are that are vulnerable to the attack type in question, and how many of these have been hit in each of the last few years, and directly derive a likelihood of any one company being hit in a given year. A slightly more sophisticated version would include making an estimate of how many were hit and managed to keep it a secret — if this is reasonably possible. A yet more sophisticated version would work out how many of these companies were worth attacking… which may or may not include the question of whether or not they individually have taken counter-measures… except that that would be a secret… and is, to some degree of likelihood, known to an attacker.

Then there is the possibility that, on the one side, a clever, cheap defence might become available and, on the other side, that the attackers might be more or less motivated or numerous in five years, or might find a vulnerability in a subset of the possible targets, and so on. Further, it may come to light that some companies are being attacked in this or a similar way, and do not know it (and some might find out after the event, if ever).

Further, there is the cumulative risk. It is all very well to calculate the risk as an annual figure, if the risk is reasonably high and the question is about how much to spend on preventing and mitigating it. If, conversely, the effect would be apt to take out the entire company, then an annualised figure is of less significance.

The OP said that this type of attack is rare, but does happen. The whole reason OP has put up a question on this site is that it is not true that the risk is vanishingly small, and it is not true that the risk is high enough to take for granted.

Let us imagine that the company decides to do nothing. Let us assume, further, that this is objectively reasonable (to an ideal observer). If it is hit in a few years, the cost will be devastating not only for the company but also for many of its customers (I take it). There is a risk of this that is not vanishingly small, and possibly not even trivially small.

Let us imagine, conversely, that the company decides to spend significant amounts of money on preventing and mitigating this attack, and again that this is reasonable. If it is not hit, ever, the cost (of prevention) is significant to very significant. There was a risk of being attacked, but it was very small, and the money looks very much like a significant loss.

Finally, there is the possible quasi-catch-22 that the attackers will see, and look elsewhere, if it has countermeasures, and the converse.

I might also note that receiving an insurance payout that is (say) equal to the (accounting) value of the business is not apt to fix the problem if the lifeblood of the business — data — is lost or otherwise compromised.

schroeder
  • 123,438
  • 55
  • 284
  • 319
Carsogrin
  • 11
  • 1
  • That is surely something you can do, when you are on a fieldtrip together, and you are unsure about what hostel to stay in, but this is definitely no sound business advice. – Tom K. Jul 25 '18 at 08:36
  • @Tom K.: I was thinking of a maritime explorer vibe, with rumoured huge monsters and the possibility of sailing off the edge; it is much more romantic. – Carsogrin Jul 27 '18 at 05:56
  • Please do not punctuate your answer with complaints and comments to others – schroeder Jul 27 '18 at 18:44
-2

If we re-phrase the problem slightly, to state that if this occurs (and it seems like it might) you will be out of a job, while mitigating the problem means increasing the company's IT spend slightly, then I think the answer becomes a bit easier to come up with.

i.e. if I don't do this and it occurs, I will be out of a job (possibly with my reputation in tatters); if I do something about it, the company will have to pay a little more money.

Seems like a bit of a no-brainer to me.

Paddy
  • 177
  • 4