8

Are there any statistics on how reliable security patches are, such as the fraction recalled or corrected?


Part of keeping a computer secure is applying security patches to it. The period between a patch becoming available and its installation carries a heightened risk of compromise, since the patch's release tips attackers off about the vulnerability. If your only concern is security, you should therefore install all security patches, and install them as soon as possible.

Yet I know of professional (in the sense that they are paid to do the job) system administrators who do not install security patches because, they say, they are concerned that installing the patches will "break" their system in some way.

It is easy to decry them as foolish. But a more nuanced analysis notes that their job is not simply to keep a computer system secure. The system has a business task to do, and a security breach is only one of several failures they must worry about. A rational approach takes into account the cost of each failure mode, the cost of protecting against it, and its likelihood of occurring. Not installing patches can be a rational decision, at least theoretically, in some circumstances. More plausibly, delaying installation of a patch while waiting to see whether it has problems could be rational in more circumstances.
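To make the trade-off concrete, here is a minimal sketch of the expected-cost comparison such a decision implies. It is written in Python, and every probability and cost in it is a hypothetical placeholder, not a real statistic:

```python
# Minimal sketch of the expected-cost trade-off described above.
# All numbers are hypothetical placeholders, not real statistics.

p_breach_per_day = 0.001   # chance an unpatched flaw is exploited, per day
cost_breach = 500_000      # estimated cost of a compromise
p_patch_breaks = 0.01      # chance the patch breaks the application
cost_breakage = 20_000     # estimated cost of patch-induced downtime
delay_days = 7             # how long we wait before patching

# Expected cost of patching immediately: only the breakage risk.
cost_patch_now = p_patch_breaks * cost_breakage

# Expected cost of delaying: breach exposure during the waiting window,
# plus a (presumably reduced) breakage risk once we do patch.
p_breach_in_window = 1 - (1 - p_breach_per_day) ** delay_days
p_patch_breaks_after_wait = p_patch_breaks / 2  # assume half the bad patches get caught
cost_delay = (p_breach_in_window * cost_breach
              + p_patch_breaks_after_wait * cost_breakage)

print(f"patch now: expected cost ~ {cost_patch_now:,.0f}")
print(f"delay {delay_days} days: expected cost ~ {cost_delay:,.0f}")
```

Whichever way that comparison comes out, it hinges on `p_patch_breaks`, the probability of a patch breaking the application, which is exactly the statistic I am asking about.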

However, for such decisions to be rational, the probability of a security patch breaking a business application must be known, and it must be moderately high. Otherwise the stated reason is more an excuse for laziness.

Just how likely is a security patch for an operating system component or framework (such as a web server) to break a business application running on that platform? Are there any statistics at all on how likely a patch is to break something?

Now, nobody really does a mathematical calculation of expected gain, but rather operates on intuitions about relative risk. I suspect the system administrators have a flawed intuition about the likelihood of a patch breaking their system. As a programmer of business applications, I find it hard to believe that a patch to an operating system component or framework that had been reasonably tested by the vendor could break the application, unless the application was badly written and riddled with errors that posed other business risks anyway. But how can we correct such faulty intuitions without some kind of statistics on faulty patches, such as the fraction of patches recalled or corrected?

Raedwald
  • 518
  • 4
  • 12
  • `for such decisions to be rational, the probability of a security patch breaking a business application must be known` while true, doesn't really describe reality. Most business decisions combine intuition with rational thought as they must function with, sometimes severely, limited information – Neil Smithline Jun 04 '16 at 14:20
  • @Neil agreed. I've amended my question to tone down the emphasis on rational cost-benefit analysis. – Raedwald Jun 04 '16 at 14:35
  • 1
    "Just how likely is a security patch for an operating system component or framework to break a business application running on that platform ?" ... Seriously ?!?! You're basically asking the old "How long is a piece of string ? " question ! – Little Code Jun 04 '16 at 15:22
  • @Little Code No. It should be possible to measure what fraction of patches were recalled or amended. That is an upper bound on the likelihood. – Raedwald Jun 04 '16 at 15:32
  • @Raedwald No. There are simply too many factors to make any sort of sensible guess, which is what you are asking for, a guess, pure and simple. There are far too many variables to take any measurements. – Little Code Jun 04 '16 at 19:41
  • As this question is asking for statistics, it is the antithesis of a "primarily opinion based" question. – Raedwald Jun 05 '16 at 15:06
  • "statistics, it is the antithesis of a "primarily opinion based" question." .... you just don't get it do you ! There are no statistics on what you are asking for, there are no facts on what you're asking for, only opinions. – Little Code Jun 05 '16 at 20:27
  • @Little Code So you say. Perhaps someone is more knowledgeable than you. You are asserting that *nobody* has recorded withdrawn or corrected security patches. That is simply not credible. – Raedwald Jun 05 '16 at 22:22

2 Answers

1

A more rational approach, taken in some places, is not to avoid updates altogether, but rather to delay and test before applying. Big organizations like Microsoft, Apple, and Mozilla have put out bad updates in the past, rendering devices or software unusable, or unusable behind a proxy/firewall, etc. It can be a good calculated risk to delay an update for a while to see whether any problems pop up soon after it is released, and then to apply the update to a few non-critical systems first to check for problems in the local environment, before a wider roll-out.
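For illustration, here is a minimal sketch of what such a staged, delay-and-test rollout could look like if encoded as a schedule. The group names and soak periods are assumptions for the example, not a recommendation:

```python
from datetime import date, timedelta

# Hypothetical staged-rollout schedule for a patch released on a given day.
# Each group waits out its "soak" period so problems can surface first.
ROLLOUT_STAGES = [
    ("canary (non-critical test hosts)", timedelta(days=3)),
    ("general fleet", timedelta(days=7)),
    ("business-critical systems", timedelta(days=14)),
]

def rollout_plan(released: date):
    """Return (group, install date) pairs for a patch released on `released`."""
    return [(group, released + soak) for group, soak in ROLLOUT_STAGES]

for group, when in rollout_plan(date.today()):
    print(f"{when}: apply patch to {group}")
```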

Ben
  • 3,846
  • 1
  • 9
  • 22
  • 1
    No, delay is rational only if the risk of immediately applying patches outweighs the risk of delaying. Pointing to examples of bad patches does not indicate it is rational. There will be examples where delay led to disaster, because the delay left a window of vulnerability open. Only estimates of likelihood can make it rational. – Raedwald Jun 04 '16 at 15:21
0

The patching strategy is part of a risk assessment for a company. Your specific case may range from "never apply security patches" to "apply security patches as soon as they are released".

The problem you describe is well known and extremely widespread. Not applying patches (security or others) is explained by various more or less rational reasons (avoiding downtime, avoiding breakage, "too complicated", ...). It is up to the person in charge of information security at your company to put the problem on the table. One of the correct outcomes may be "we will not apply patches".

Putting the issue on the table and formalizing it through a policy helps everyone: the security people are happy, the administrators have an ass-covering policy backing up their activities in case things go south, internal audit can tick another checkmark and, ultimately, the company will benefit from it.

Of course it is not possible to answer your question. You may have horror stories of how badly it went; I will give you mine (spoiler: happy ending). In a very big company, ~12 years ago, IT was outsourced to a service company. This company installed their patch management system and forgot to disable it. 40,000 Windows machines were updated on the spot with patches dating back years. There was one day of tension, since some other software required an update as well, and then everything went fine. YMMV.

WoJ
  • 8,957
  • 2
  • 32
  • 51
  • A risk assessment that does not take into account the likelihood of the risks is no risk assessment at all. I am asking for some evidence of the likelihood of a risk (patch breaks system). You seem to be saying a "risk assessment" can be done without an estimate of such likelihood. – Raedwald Jun 05 '16 at 18:03
  • The reality is that most of the "risk assessments" are done via magical hand-waving without any hard numbers. Quantifying the likelihood is almost impossible (except for "almost zero chance" or "100% sure" - which is not helpful). Quantifying the impact is either quite simple (downtime = €/sec loss, or "if the financial server fails, we will not close the books, which means a fine of xxx €"), or again impossible. The sad reality is that "risk assessment", particularly in IT/IS, relies almost completely on experience and gut feeling. – WoJ Jun 06 '16 at 07:41