
I understand that many open-source projects ask that vulnerabilities not be reported on their public bug tracker but instead be disclosed privately to the project's security team, to prevent the bug from becoming public before a fix is available. That makes perfect sense.

However, since the code repository of many open-source projects is public, won't fixing the bug in the source code immediately disclose it?

What measures (if any) are taken by open-source projects (e.g. the Linux kernel) to ensure that fixes for security vulnerabilities can be deployed to the end user (e.g. a Samsung Android phone) before the vulnerability is disclosed?

Heinzi
  • How do you deploy open-source code without revealing it? I'm not sure what you want is possible in the way you are thinking of it. Fixing something that's public discloses what was fixed... – schroeder Nov 11 '20 at 09:25
  • @schroeder: Yeah, that's what I thought, too, but just today I read that [a Chrome vulnerability has been fixed, but details for the bug are not released yet](https://www.heise.de/news/Google-Chrome-Neue-Browserversion-schliesst-Sicherheitsluecke-4952920.html), which seems weird to me, since Chromium is open source, so details should be easily obtainable by looking at the commit history. Since I'm not involved in any big open-source projects myself, I thought that maybe I am missing something... – Heinzi Nov 11 '20 at 09:41
  • There's a big difference between posting human-readable details and explanations, and updating code. I just think that you are interpreting their words too literally. – schroeder Nov 11 '20 at 09:55
  • The link you provided explains all this, actually. – schroeder Nov 11 '20 at 10:03
  • @schroeder: The link explains why they didn't release details, which makes sense. They do not talk about how this is handled with respect to the public repo. Google (the company) could, for example, fix the Chrome bug on a local dev's machine, build and release it, and only push the fix to the public Chromium repo 1-2 days later. Or they could deliberately avoid mentioning the security issue in the commit message to make it (a bit) harder to find. I don't know if any of these things (or others) are done, hence my question. – Heinzi Nov 11 '20 at 10:15
  • They say that the code is available, but they will not post the ***details*** or explain the fix. If you can work it out from the code, that's still always available. – schroeder Nov 11 '20 at 10:17
  • Of course there is a test/dev repo that they use to develop and test before release. That's standard and has nothing to do with hiding security-specific bugs. If they do not mention the security issue, they risk people not patching promptly (especially since the code discloses the issue that attackers could exploit). So, they need to release the code, announce the urgent issue, and "obscure" the issue by not explaining it straight away. – schroeder Nov 11 '20 at 10:19
  • @schroeder: I see. Your explanation makes sense, and if you feel confident enough, feel free to add a "They don't, because ..." answer. – Heinzi Nov 11 '20 at 10:29
  • @Heinzi The point is that they want to disclose the bug precisely when a fix is available. What they want to avoid is a situation in which attackers know what the bug is, but users cannot patch it yet. –  Nov 11 '20 at 10:48
  • @schroeder: "How do you deploy open-source code without revealing it?" – The overwhelming majority of users of open-source software never come into contact with the source. I am a programmer and geek, and yet, the Linux kernel on my phone magically appears over-the-air from Samsung, my browser update gets pushed by Google, and I have never ever even looked at the Darwin source code, let alone compiled it. It would be perfectly possible for Google to update my Chrome without disclosing the vulnerability. – Jörg W Mittag Nov 11 '20 at 20:39
  • In many cases, the vulnerabilities are disclosed shortly after they are fixed. This was the case with the Heartbleed bug (see https://en.wikipedia.org/wiki/Heartbleed). – mti2935 Nov 12 '20 at 03:34
  • If a patch defaults a variable from `false` to `true` then it would take a whole heck of a lot of knowledge to deduce that the patch fixes a bug which enabled users to click on the pixel at coordinates 236,954 18,248 times and be granted access to the Pentagon's database. – MonkeyZeus Nov 12 '20 at 21:01
  • While I think that this is a good question on an actual problem, releasing a fix as source code is not fundamentally different from releasing a fix in compiled form. – Carsten S Nov 13 '20 at 12:02
  • @JörgWMittag Wat? How does your personal disinterest in reviewing source code stop Mal Malicious, evil hacker extraordinaire, from watching changes to source code? – 8bittree Nov 13 '20 at 20:41

4 Answers


They don't. By releasing the code, they automatically "disclose" the issue to anyone who can reverse engineer the patch. But they can delay explaining the fix or providing the details in an easily consumable form.

If they delay releasing the code, they force users to use known-vulnerable code.

If they release the code and do not announce it as a security fix, then users might not patch and end up running known-vulnerable code.

So, they fix the code, release it, announce a security fix so that people assign the appropriate urgency, but they can delay explaining all the details to make it a little harder for attackers to figure out how to exploit the vulnerability.

Is that effective?

To some degree, "security by obscurity" has a place in a strategy in order to buy some time. Since it costs nothing, and it can have some positive effect, it seems like an easy call to make.

schroeder
  • An additional note that might be worth making is that _this applies to closed source software as well_. A source code patch is, by its nature, _easier_ to reverse engineer than a binary patch, but only to the extent that it gives _more obscurity_, not that it gives a qualitatively different level of security. Depending on the technologies involved, it might be trivial for an expert to spot e.g. a buffer overflow being patched in a compiled executable. – IMSoP Nov 11 '20 at 18:01
  • I wouldn't call that "trivial". It still takes effort to analyze how to execute the proper path to get to the vulnerable code and what inputs to use that exploit the vulnerability, sometimes significant effort. – SplashHit Nov 11 '20 at 20:10
  • "Since it costs nothing" – the cost is a reduced amount of peer review. If you post a commit without explanation, fewer people can read the code and check whether it actually fixes all instances of the problem and does not introduce new problems. And introducing new bugs/vulnerabilities by providing a quick fix which only a handful of people reviewed has happened in the past. – Falco Nov 12 '20 at 09:49

The same way they prevent disclosing the report: by not disclosing it.

Since you mentioned the Linux kernel specifically: only a vanishingly small number of users build their kernels directly from the master branch of Linus Torvalds's Git repository. The vast majority of users simply use whatever kernel their distribution's automatic updater installs.

In turn, the vast majority of distributions don't build their kernels directly from the master branch of Linus Torvalds's Git repository either. They use some official release version as the base, backport some new features and fixes from newer kernels, integrate some third-party patches that are not part of Linus's repository, integrate some distribution-specific patches, and so on.

Since they integrate patches that are not part of Linus's repository anyway, it makes no difference to them to integrate just one more patch for the most recent vulnerability.

So, basically, what happens is that the release of the patch is coordinated such that by the time the vulnerability is fixed in Linus's Git repository and publicly announced, patched kernel images have already been pushed out to users from the distributions' update servers.

Note that, generally, the Linux developers prefer to publish fixes as quickly as possible. But if you check out the chapter on Security Bugs in the Linux kernel documentation, you will find the paragraph on Coordination, which I think explains the specific process for the Linux kernel pretty well and is also representative of how some other large projects handle the issue:

Fixes for sensitive bugs […] may need to be coordinated with the private [linux-distros] mailing list so that distribution vendors are well prepared to issue a fixed kernel upon public disclosure of the upstream fix. Distros will need some time to test the proposed patch and will generally request at least a few days of embargo […]. When appropriate, the security team can assist with this coordination, or the reporter can include linux-distros from the start.

So, the trick is:

  • Coordinate with each other to make sure that all distribution vendors are ready to push out updates. This would, for example, also include something like Google coordinating with the downstream handset vendors, etc. (A toy scheduling sketch follows this list.)
  • Release the fix and the information that it is a serious security vulnerability, but do not necessarily release the nature of the vulnerability. (A serious attacker will be able to figure it out from the patch, but you buy yourself a little bit of time.)
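
To make the scheduling constraint concrete, here is a toy model (not the kernel's or any distribution's real tooling; the vendor names, dates, and lead times are invented) of picking the latest date a private patch can be shared so that every downstream vendor can ship before the embargo ends:

```python
from datetime import date, timedelta

DISCLOSURE = date(2020, 11, 18)  # hypothetical agreed embargo end

# Invented vendors and how long each needs to build, test, and stage
# an update once it receives the private patch.
LEAD_TIMES = {
    "distro-a": timedelta(days=3),
    "distro-b": timedelta(days=5),
    "handset-vendor": timedelta(days=10),  # OTA pipelines are slow
}

def latest_private_share_date() -> date:
    """Last day the patch can go out privately so everyone can ship."""
    return min(DISCLOSURE - lead for lead in LEAD_TIMES.values())

def ready_by_disclosure(share_date: date) -> dict:
    """Which vendors can have updates staged before the embargo ends."""
    return {name: share_date + lead <= DISCLOSURE
            for name, lead in LEAD_TIMES.items()}

share = latest_private_share_date()
print(f"Share the patch privately no later than {share}")
print(ready_by_disclosure(share))  # should be True for every vendor
```

The slowest link (here, the invented handset vendor with its over-the-air pipeline) dictates when coordination has to start, which is why distros request embargoes of at least a few days.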

How much of this is done, and what the timeframes are, depends on the nature of the vulnerability. A remotely-exploitable privilege escalation will be treated differently than a DoS that can only be exploited by someone who is already logged into the machine locally.

In the best possible case, the experience of the end user will be that by the time the vulnerability becomes public, their computer will already have greeted them with a message informing them that the system has been rebooted overnight for the installation of a critical security update, and that's it.

Jörg W Mittag
  • *"the release of the patch is coordinated such that [...] patched kernel images have already been pushed out to users from the distributions' update servers."* I think you're saying that all well-known Linux vendors coordinate with each other to ensure security binary releases come out before the code is released. Can you attest to this personally, or do you have a source for this? – jpaugh Nov 11 '20 at 21:12
  • @jpaugh That's the whole point of the [oss-security distros mailing list](https://oss-security.openwall.org/wiki/mailing-lists/distros). That is a private mailing list with representatives of all major Linux distros, BSD flavours, etc., used to coordinate disclosure and patch releases. Occasionally, somebody messes up and discloses stuff too early (for example, by running a testing build on publicly visible infrastructure, or pushing a private branch with a patch to a public repository), but most of the time it works fairly well. – TooTea Nov 12 '20 at 09:31

Coincidentally, I have a tab open about CVE-2020-17640 in the Eclipse Vert.x project, where the product maintainers are discussing this exact issue!

Julien Viet 2020-09-28 13:07:31 EDT

So I just need to provide the details of the CVE to get one?

If that is so, I don't get how that can remain confidential until we publish a fix.

Wayne Beaton 2020-09-28 23:03:45 EDT

Provide me with the details here and I'll assign a CVE, then you can wrap up your commit and push. I can delay pushing to the central authority [Mitre / NVD] for a day or so, but no longer.


Your question is

How do open-source projects prevent disclosing a bug while fixing it?

I think the answer is *with difficulty*.

Exactly as you say, you want to keep the details private until you have a patch ready, but there are a number of competing interests that make it difficult to keep information from going public.

  1. The bug will often be reported via a public bug tracker. You need a way to take the report off the public tracker, or to mark it private.
  2. You need to give the details to Mitre in order to get a CVE number assigned. While I've never submitted a CVE myself, I assume Mitre will work with all parties involved to delay publication until an appropriate time.
  3. In the commit fixing the issue, you want to reference the CVE number, which is problematic if your project's source is hosted on, for example, a public Git repo.

Personal anecdote:

I've noticed that a lot of projects keep CVE numbers out of commit messages, which is super frustrating for me when I'm trying to decide if a given CVE is a "Take the server down until we can patch", or a "We're good to wait until next cycle". CVE-2020-17640 is one of those that's rated CVSS 9.8, but there's literally no info available to help me determine if this will be exploitable in the deployment I'm investigating.
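
For what it's worth, you can at least check programmatically whether any details or metrics have been published yet. Here is a minimal sketch, assuming the `requests` package and the NVD's public v2.0 JSON API (which rate-limits unauthenticated clients, so keep it to one-off lookups):

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_summary(cve_id: str) -> dict:
    """Return whatever the NVD has published for this CVE, if anything."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    if not items:
        return {"id": cve_id, "status": "not published yet"}
    cve = items[0]["cve"]
    description = next((d["value"] for d in cve.get("descriptions", [])
                        if d["lang"] == "en"), "")
    return {"id": cve_id,
            "status": cve.get("vulnStatus"),
            "description": description}

print(cve_summary("CVE-2020-17640"))
```

If the description is still a placeholder, the details are probably being withheld exactly as discussed above, and you are stuck making the call on the CVSS score alone.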

Mike Ounsworth
  • As I understand it: You can request a CVE ID by giving enough details so that the organization (not necessarily MITRE, but for simplicity let's assume) can decide whether the issue deserves an ID. When that's done, the ID is marked as "reserved" but not yet published. A CVE must reference public information, so you have to publish that information separately and then inform MITRE, which will publish the CVE. So it's under your own control, not MITRE's, when the CVE becomes public. That doesn't really match the linked issue, though, so there might be more details to it. – Voo Nov 12 '20 at 10:44
  • [Presentation from MITRE about the process](https://cve.mitre.org/CVEIDsAndHowToGetThem.pdf) that seems to mostly agree with my understanding. – Voo Nov 12 '20 at 10:48

One way would be to write the patch in a less-than-obvious way and to use misleading comments and commit messages, possibly mixing the patch in with multiple other (functionality/stability) patches, or documenting a functionality/stability patch as the vulnerability bugfix and vice versa. Cleaning the situation up later would, of course, be advisable.
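
For illustration only, here is a purely hypothetical before/after of such a disguised fix (not from any real project): the commit message might say "tidy up HTTP client defaults", while the real change is that TLS certificate verification is switched on:

```python
import requests

# Before: the vulnerable behaviour, with certificate checks disabled.
def fetch_old(url: str) -> bytes:
    return requests.get(url, verify=False, timeout=30).content

# After: the "cleanup" that is really the security fix. Nothing in the
# diff says "vulnerability"; it just flips one argument.
def fetch_new(url: str) -> bytes:
    return requests.get(url, verify=True, timeout=30).content
```

A one-line diff like this is cheap to hide among genuine cleanups, which is the entire point of the approach (and also why reviewers dislike it).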

rackandboneman
  • [Security through obscurity](https://en.wikipedia.org/wiki/Security_through_obscurity)? – Peter Mortensen Nov 13 '20 at 18:38
  • To quote another answer: "To some degree, 'security by obscurity' has a place in a strategy, as a way to buy some time." And that is exactly what I was expanding on. Tripping an attacker will of course not stop them, but it will break their momentum. – rackandboneman Nov 14 '20 at 19:59