5

There is a kind of race condition between the time when a developer releases a security update and the time when the update is actually applied by the user. In the meantime, an attacker can learn about the vulnerability and attack a system that hasn't been patched yet. Can this problem be avoided completely? If not, what are the best practices for mitigating it as much as possible?

This might not sound like a huge problem in some cases, for example when the patch is for a complex issue that is difficult to exploit, especially when the software is closed-source. In that case, an attacker will need several days to develop an exploit, and the user has enough time to update the system. But in other cases the effects of this race condition are actually disastrous. Imagine the following commit in an open-source project on GitHub:

- echo '<span>' . $username . '</span>'; // line removed
+ echo '<span>' . htmlspecialchars($username) . '</span>'; // line added

That's obviously a fix for a persistent XSS vulnerability. Even supposing that the security update is publicly released only an hour after that commit, and that the user applies it only an hour after release (both unrealistically fast), that still leaves a two-hour window for a vulnerability that even an inexperienced attacker could exploit in a minute. For very popular software, it's not uncommon for automated attacks to start only a few hours after a security update has been released, or worse, sometimes even shortly before the official release.
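For readers less familiar with this class of bug, the same escaping fix can be sketched in Python, where `html.escape` plays the role of PHP's `htmlspecialchars` (the function name here is illustrative):

```python
import html

def render_username(username: str) -> str:
    # Escape &, <, >, and quotes so user-supplied input cannot inject
    # markup, mirroring what htmlspecialchars() does in the PHP fix above.
    return "<span>" + html.escape(username) + "</span>"

print(render_username("<script>alert(1)</script>"))
# prints: <span>&lt;script&gt;alert(1)&lt;/script&gt;</span>
```

Without the escaping, the payload would be stored and executed in every visitor's browser; with it, the payload is rendered as inert text.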

So what can be done to completely avoid or partially mitigate this race condition? What are the best practices used by vendors and customers who are serious about the security of their systems? Here are a few possible solutions that I thought of:

  • Don't make the source code readily available, maybe by compiling or obfuscating it. This is what is often done for proprietary software. I suppose Microsoft relies on the fact that reverse-engineering their patches will take time, and in the meantime all users will be able to do the updates.
  • Don't advertise security fixes. Just fix and release, without telling anybody that you have fixed anything or leaving any hints anywhere. This might delay the attacks, but it might also delay the updates, because users might think "it doesn't sound important, I'll update later".
  • Force automatic patches for security bugs. Systems are patched automatically as soon as possible, before any public announcement or security advisory, fixing only the security issue and nothing else so that no functionality breaks. This sounds like a good idea, provided that all systems are patched almost at once, within a short time frame. WordPress does something like this by default for its core, but I don't know how long it takes to update all the installations (there are millions of them).
  • Rely on the services of a security company to stay ahead of the attackers. There are security companies that claim to monitor a variety of things just as an attacker would (detecting and investigating new attacks, checking official advisories, even gathering information from blackhat communities, etc.) and to help you stay ahead with special advisories, web application firewalls, etc. This seems to be another popular solution, except it doesn't really solve the race problem; it just tries to race faster. Also, it might help with popular software, but I guess it's hard to find services that cover less common applications.
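To make the automatic-patching option concrete: on Debian-based servers, the `unattended-upgrades` package can apply security-only updates without operator intervention. A minimal configuration sketch (standard file paths and option names; adjust for your distribution):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (sketch, Debian/Ubuntu)
// Only pull packages from the security pocket, nothing else.
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};

// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package lists and run the upgrader daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

This narrows the race window to roughly the daily run interval, at the cost of trusting the vendor's security channel not to break anything.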

I can't think of anything else right now.

reed
  • Here's another: adopt a certain frequency of releases, e.g. weekly, so that a possible security patch won't be highlighted – postoronnim Mar 04 '19 at 19:25

3 Answers

2

This is where the concept of Defence in Depth comes into its own. Yes, patch regularly and properly, but account for the fact that in any non-trivial system you will have vulnerable components.

Your first line of defence, since XSS is sometimes exploited via phishing, is training your end-users to spot and avoid phishing attacks.

If the user has implemented a Web Application Firewall, then whilst the XSS may still exist, attempts to exploit it can be detected and blocked.
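As a toy illustration of the signature-matching idea behind a WAF (the patterns below are simplistic assumptions; real WAFs such as ModSecurity ship extensive rule sets like the OWASP Core Rule Set):

```python
import re

# Naive signatures for common reflected/stored XSS payload shapes.
XSS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"<script\b", r"javascript\s*:", r"\bon\w+\s*=")
]

def looks_like_xss(value: str) -> bool:
    # Flag a request parameter if it matches any known-bad pattern.
    return any(p.search(value) for p in XSS_PATTERNS)
```

A rule like this can buy time against automated attacks exploiting a freshly disclosed bug, even before the application itself is patched.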

If the web server is configured with standard same-origin policies, the likelihood of an XSS being successful is reduced, even if the WAF doesn't detect it.
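One concrete browser-side control of this kind is the Content-Security-Policy header, which blocks inline and third-party scripts even when a payload gets through. A sketch in Python (the source lists here are assumptions to adapt per site):

```python
# Restrictive CSP: only same-origin scripts, no plugins/objects.
CSP = "default-src 'self'; script-src 'self'; object-src 'none'"

def security_headers() -> dict:
    # Headers a response handler would attach to every page.
    return {
        "Content-Security-Policy": CSP,
        # Stops browsers from MIME-sniffing responses into scripts.
        "X-Content-Type-Options": "nosniff",
    }
```

With `script-src 'self'` and no `'unsafe-inline'`, an injected `<script>` tag in the page body simply won't execute in a CSP-aware browser.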

If your software uses proper session management, and relies on e.g. re-authentication for critical transactions, then the impact of an XSS attack can be significantly reduced, even if it's successful.
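For instance, marking the session cookie HttpOnly keeps an injected script from reading it, limiting what a successful XSS can steal. A minimal sketch using Python's standard library (cookie name and flag choices are illustrative):

```python
from http.cookies import SimpleCookie

def session_cookie(token: str) -> str:
    # HttpOnly: JavaScript (including injected XSS payloads) cannot
    # read the cookie. Secure: sent only over HTTPS. SameSite=Strict:
    # not sent on cross-site requests (requires Python 3.8+).
    c = SimpleCookie()
    c["session"] = token
    c["session"]["httponly"] = True
    c["session"]["secure"] = True
    c["session"]["samesite"] = "Strict"
    return c.output(header="Set-Cookie:")
```

Combined with re-authentication for critical transactions, a stolen or replayed session is worth far less to the attacker.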

If you ensure that data is properly encrypted, that users have access only to what they need, and that sensitive data is stored only when absolutely necessary, then exploiting that XSS vulnerability will cause less harm, even if it gives an attacker full access to the vulnerable system.

JeffUK
1

Here are some defenses against reverse-engineering patches I've seen in the wild.


Publicly announce when a critical patch is going to be available.

Making sure that both attackers and your customers get access to the patch at the same time allows your customers to patch everything they need to before attackers have had enough time to develop their exploits.

A famous example is Patch Tuesday, Microsoft's practice of releasing Windows patches on known dates.

Another example is Drupal: they let people learn about a highly critical patch one week before its release.


Release a single patch for everything.

This strategy is also used by Microsoft: by bundling many fixes into a single patch released every few weeks, reverse-engineering every single change takes far more effort than spotting a one-line security fix.

Benoit Esnard
1

In a well-architected system, no single vulnerability should be enough to give an attacker easy access. For example, perhaps a critical bug is discovered in PHP, but you are running ModSecurity with nginx, you have cleaned up unreferenced files, you have the minimum permissions necessary on your filesystem, the database user can only run stored procedures, and your developers were already following the OWASP guidelines.

It's still bad and you still need to patch, but because you have been following good security hygiene all along, you have some breathing room: you can perhaps wait until a maintenance window if the work is going to be intrusive. Ideally, an attacker would need to burn several 0-days just to get in, and more to move laterally. You can't be perfectly secure, but you can raise the cost of attacking you.

The second thing is not to paint a target on your own back. Assuming you are going to be the victim of a targeted rather than a speculative attack, and modern watering-hole/supply-chain attacks make that increasingly likely, don't let your devs plaster their skills all over LinkedIn, or use identities that can be trivially traced back to your company when posting on forums about specific versions of specific technologies. In our hypothetical PHP scenario, an attacker who is already watching you and knows you use PHP would get an "aha!" moment as soon as the new vulnerability was announced, and would try it straight away.

Gaius