There is a kind of race condition between the time when a developer releases a security update and the time when the user actually applies it. In the meantime, an attacker can learn about the vulnerability and attack a system that hasn't been patched yet. Can this problem be avoided completely? If not, what are the best practices for mitigating it as much as possible?
This might not sound like a huge problem in some cases, for example when the patch is for a complex issue that is difficult to exploit, especially when the software is closed-source. In that case, an attacker may need several days to develop an exploit, which gives users enough time to update their systems. But in other cases the effects of this race condition are disastrous. Imagine the following commit in an open-source project on GitHub:
- echo '<span>' . $username . '</span>'; // line removed
+ echo '<span>' . htmlspecialchars($username) . '</span>'; // line added
That's obviously a fix for a persistent XSS vulnerability. Even supposing that the security update is publicly released only an hour after that commit, and that the user applies it only an hour after release (both of which are unrealistically optimistic), this still leaves a two-hour window for a vulnerability that even an inexperienced attacker could exploit in a minute. For very popular software, it's not uncommon for automated attacks to start only a few hours after a security update has been released, or worse, sometimes even shortly before the official release.
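To spell out what that one-line change means in practice, here is a minimal sketch (the `$username` value is a hypothetical attacker-chosen string, not from the project above) of the vulnerable and patched output; the point is that anyone watching the repository can derive the attack from the diff alone:

    <?php
    // Hypothetical stored value controlled by an attacker.
    $username = '<script>alert(document.cookie)</script>';

    // Vulnerable version (the removed line): the stored markup is sent to
    // the browser verbatim, so the injected <script> runs for every visitor.
    echo '<span>' . $username . '</span>';

    // Patched version (the added line): special characters are encoded as
    // HTML entities, so the payload is rendered as harmless text.
    echo '<span>' . htmlspecialchars($username, ENT_QUOTES, 'UTF-8') . '</span>';
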
So what can be done to completely avoid or partially mitigate this race condition? What are the best practices used by vendors and customers who are serious about the security of their systems? Here are a few possible solutions that I thought of:
- Don't make the source code readily available, for example by compiling or obfuscating it. This is what is often done for proprietary software. I suppose Microsoft relies on the fact that reverse-engineering their patches takes time, and that in the meantime most users will have had a chance to apply the updates.
- Don't advertise security fixes. Just fix the issue and release the update, without telling anybody that you have fixed anything or leaving any hints anywhere. This might delay the attacks, but it might also delay the updates, because users might think "it doesn't sound important, I'll update later".
- Force automatic patches for security bugs. Systems are patched automatically as soon as possible, before any public announcement or security advisory, and the patch fixes only the security issue and nothing else, to avoid breaking functionality. This sounds like a good idea, provided that all systems are patched almost at once, in a short time frame. WordPress does something like this by default for its core (a minimal configuration sketch follows this list), but I don't know how long it takes to update all the installations (there are millions of them).
- Rely on the services of a security company to stay ahead of the attackers. There are security companies that claim to monitor a variety of things just like an attacker would (detecting and investigating new attacks, checking official advisories, even gathering information from blackhat communities, etc.) and to help you stay ahead with dedicated advisories, web application firewalls, and so on. This seems to be another popular solution, except it doesn't really solve the race condition; it just tries to race faster. Also, this might help with popular software, but I guess it's hard to find services that can help with less common applications.
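To make the automatic-update bullet above concrete, this is roughly how WordPress exposes that behaviour; a minimal wp-config.php sketch (the constant is real, the comments are my own summary):

    <?php
    // wp-config.php: WordPress applies minor core releases, which include
    // security fixes, automatically by default; this constant makes the
    // choice explicit. Use true to also auto-apply major releases, or
    // false to disable core auto-updates entirely.
    define( 'WP_AUTO_UPDATE_CORE', 'minor' );

Even with this enabled, the rollout across millions of installations is not instantaneous, which is exactly the window I'm asking about.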
I can't think of anything else right now.