2

In the 2017 WannaCry attack, big institutions like the NHS and Telefónica, which had not applied the Microsoft security patches associated with EternalBlue, were forced to deal with a significant impact to their operations.

What is a reasonable patch management strategy to avoid this kind of exposure, and what are some of the more worrisome side effects that applying patches on a frequent basis would have on the infrastructure of a big institution like the NHS?

Ori
Quora Feans
  • 2
    Answers to this question would be purely opinion-based and thus may be off-topic here. Talking about opinions, the primary reason for not having systems patched would revolve around not having proper patch management in place. Does that fall under incompetence? Maybe the reason is funding to have proper patch management for such a large organisation. That would be incompetence, but on a totally different level. – Marko Vodopija May 13 '17 at 13:35
  • @MarkoVodopija: fair enough, rewriting the question – Quora Feans May 13 '17 at 13:39
  • 3
    I think this question is too broad. Just to highlight one tiny detail of patch management: in scientific or industrial environments there are often systems which control specific hardware. Patching these devices is usually only allowed once the vendor of the device has officially declared a patch as safe, because such patches might cause changes in timing behavior or have other unexpected side effects. Even outside such environments, patches might cause problems with installed software, drivers, etc. – Steffen Ullrich May 13 '17 at 14:06
  • 1
    I agree with Steffen that this question is unfortunately still far too broad. Patch management policies are not one-size-fits-all; they are highly environment-specific, and as in his example, there are many places where patching is simply not an option at all. – Xander May 13 '17 at 14:09
  • Besides "patch management", it appears this virus/worm exploits servers/systems without a firewall protecting ports 139 or 445. Best practice would be (I assume) to protect *ALL* services behind a firewall and use a VPN or similar technology for outside access; a quick reachability check like the sketch after this comment thread can confirm whether those ports are exposed. Running bare ports will get you in trouble every time. – markspace May 13 '17 at 21:03
  • The perimeter is a bit more porous than the corporate firewall. Coffee-shop attacks, where users on VPNs are simultaneously exposed to internal network shares and public networks, can expose a machine, while software firewalls tend to be confusing for end users to administer when deciding on the security differences between a coffee shop, their home network and a customer's site. Mobile workstations may also suffer from poor update schedules when users return from leave. There are ways around all this, but it's expensive and a patchwork of tools. – mgjk May 14 '17 at 08:19
  • Complexity in patching is IMHO one of the strongest reasons for multiple 'zones' in a network. If a box is difficult to patch, it should not be easy to reach on the network. – mgjk May 14 '17 at 08:24
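
A minimal sketch of the reachability check markspace's comment implies, assuming you are permitted to probe the host; the host address and timeout below are illustrative, not anything from the thread:

```python
# Check whether the SMB ports mentioned above (139, 445) accept a TCP
# connection from where this script runs. Illustrative values only.
import socket

SMB_PORTS = (139, 445)

def exposed_smb_ports(host: str, timeout: float = 2.0) -> list[int]:
    """Return the subset of SMB ports that accept a TCP connection."""
    open_ports = []
    for port in SMB_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or host unreachable
    return open_ports

if __name__ == "__main__":
    host = "192.0.2.10"  # hypothetical internal host
    ports = exposed_smb_ports(host)
    if ports:
        print(f"{host} exposes SMB ports {ports}; firewall or isolate it.")
    else:
        print(f"{host} does not expose SMB on 139/445.")
```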

4 Answers

1

For the patches addressing the vulnerabilities used by the ransomware, Microsoft described no negative impact.

In a utopia, system administrators would be patching systems on a near daily basis. However, in the real world, it is the complete opposite.

Some updates may have to be reviewed so that they do not negatively impact the productivity of an enterprise. While such problems are rare, some institutions consider downtime and the potential loss of work to be very significant.

Systems may not be connected to the network, or may not be connected to a corporate server that instructs the deployment of certain updates. Laptops are notorious for being behind on patches because of their portable and intermittently connected nature.

Furthermore, some institutions may have patch cycles, such that a patch might be public but not deployed for a month or more.

There was no valid excuse for the impact of the ransomware: the patch and notice were public on March 14th, but the vulnerabilities were not exploited at scale until almost two months later (May 12th).

You can read the security bulletin by Microsoft here: https://technet.microsoft.com/en-us/library/security/ms17-010.aspx
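
As a rough illustration of how wide that window was, here is a minimal sketch of the overdue-patch check a patch cycle could automate. The record format and the 30-day deadline are assumptions for the example, not anything Microsoft prescribes:

```python
# Flag patches whose age since publication exceeds a deployment deadline.
# The dates for MS17-010 come from the answer above; everything else
# (the 30-day policy, the record layout) is illustrative.
from datetime import date

DEADLINE_DAYS = 30  # hypothetical policy: deploy within a month

patches = [
    # (bulletin, published, date deployed on this host, or None)
    ("MS17-010", date(2017, 3, 14), None),
]

today = date(2017, 5, 12)  # the day WannaCry hit
for bulletin, published, deployed in patches:
    age = (today - published).days
    if deployed is None and age > DEADLINE_DAYS:
        print(f"{bulletin}: {age} days old and still not deployed -> overdue")
```

Run against the WannaCry timeline, this prints a 59-day-old overdue patch, which is the gap the answer calls inexcusable.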

dark_st3alth
0

The answer is easy: apply (security- and quality-critical) patches as soon as they are available from the vendor, which implies you only use vendor-supported platforms. This might not always be easy to check; for example, Ubuntu LTS provides security patches only for a very limited subset of packages.

For non-critical updates, it depends on whether you want to keep up or plan to redo the systems every 3 years anyway.
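
A sketch of how "as soon as available" can be monitored on an apt-based platform such as Ubuntu: simulate an upgrade and count the packages coming from the security pocket. The "-security" archive-name match is an assumption about how your sources are configured:

```python
# For an apt-based system: list pending upgrades that originate from a
# "-security" archive. The -s flag simulates, so nothing is installed.
import subprocess

def pending_security_updates() -> list[str]:
    out = subprocess.run(
        ["apt-get", "-s", "dist-upgrade"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Simulated installs look like: "Inst openssl [old] (new origin-security ...)"
    return [
        line.split()[1]
        for line in out.splitlines()
        if line.startswith("Inst") and "-security" in line
    ]

if __name__ == "__main__":
    pkgs = pending_security_updates()
    print(f"{len(pkgs)} security updates pending: {pkgs}")
```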

eckes
  • Do you mean "only use vendor supported software/packages"? – Pierre B May 13 '17 at 18:49
  • Yes, at least with organizational support. I mean it depends on your core business, but if it is not IT you certainly don't want to maintain software yourself. – eckes May 13 '17 at 18:51
0

Sasser has been the gold standard for cautionary tales about patching. It's believed it was reverse engineered from the April monthly patch and released within a week.

That would be April 2004. The concept of automatic patch-based exploit generation (http://bitblaze.cs.berkeley.edu/papers/apeg.pdf) is your worst-case scenario for a released weaponized exploit. Wormable exploits against default-enabled services have a long history on Windows, and organizations have had 13 years to get their act together in terms of timely patching.

The time it took to bundle an already-weaponized exploit was actually quite long (roughly two months). Even so, there's no excuse in many cases.

Microsoft also tests their patches all together (that is, with no exceptions for non-optional patches), whereas organizations typically use WSUS filters to restrict patching to critical/security patches only. This is another practice that I would highly discourage: it means you're running a set of patches that makes your systems uniquely vulnerable to exploit chaining (http://www.informit.com/articles/article.aspx?p=1439194) and that decreases the difficulty an attacker faces in elevating privileges after they inevitably get onto a system.

If I were designing a patching policy, it would be enforced on all endpoints within 72 hours of release (gate logging in to user endpoint devices until they have patched, using a login/startup script; a sketch of such a gate follows). Further, it would be an "all"-patches strategy that matches Microsoft's own quality assurance process, so I'm not subject to issues due to incomplete patch sets.
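
A hedged sketch of that login gate for a Windows endpoint, assuming PowerShell's Get-HotFix is available. The 72-hour threshold mirrors the policy above; the wrapper script that actually denies the login is out of scope here:

```python
# Exit non-zero when no patch has been installed in the last 72 hours,
# so a login/startup wrapper can block sign-in until the machine updates.
# Assumes a Windows endpoint where PowerShell's Get-HotFix is available.
import subprocess
import sys
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=72)  # policy from the answer above

def newest_hotfix_date() -> datetime:
    # Sort by InstalledOn (entries lacking a date sort first) and take
    # the most recent one, formatted as an ISO date string.
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "(Get-HotFix | Sort-Object InstalledOn | "
         "Select-Object -Last 1).InstalledOn.ToString('yyyy-MM-dd')"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return datetime.strptime(out, "%Y-%m-%d")

if __name__ == "__main__":
    if datetime.now() - newest_hotfix_date() > MAX_AGE:
        print("Endpoint is behind on patches; gating login until it updates.")
        sys.exit(1)  # non-zero: the wrapper denies login
    sys.exit(0)
```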

Ori
0

Not directly related, but here is a story that may explain why patches sometimes need to be reviewed before being applied in production environments. I worked in a medium-to-large organization that ran automated production tasks based on Microsoft Excel, and a large number of machines were involved in that production system. Nothing wrong so far. As we had security-aware admins, every machine was running an antivirus, and antivirus signatures were updated every day, during the night.

One night, the antivirus signature file contained a false detection that quarantined a DLL required by Microsoft Office... Nothing could be produced during the whole morning, and production only restarted in the afternoon: it took less than half an hour to diagnose the problem, but half a day for the support team to restore the DLL on all machines. It might not seem very important, but as the products contained weather forecasts, customers were not especially pleased not to find them at the beginning of the work day...

But that is not a reason for a security patch to still be unapplied a full month after it has been made public.

Serge Ballesta