
Secunia publishes an annual vulnerability review that covers web browsers, and surprisingly, Google Chrome leads in the number of known vulnerabilities compared to the likes of Microsoft Internet Explorer and Mozilla Firefox.

Akin to AV-Comparatives, are similarly detailed reports available for all web browsers, including Opera and Vivaldi?

The purpose is to keep track of moving trends in browser security from an organizational perspective.

Motivated
    Please note that a longer list of known vulns doesn't mean that the piece of software is more, less, or equally secure than another piece of software. – Deer Hunter Feb 10 '16 at 08:39
    @DeerHunter that is why he is looking for a more comprehensive report, something that shows the severity of the vulnerability, and how good the vendor was at fixing it in a timely manner, etc – Richie Frame Feb 10 '16 at 09:05
  • @RichieFrame - for all I know, Google has the most impressive security team. YMMV – Deer Hunter Feb 10 '16 at 09:06
    What you would really need is information on how often a browser, including its plugins (e.g. Flash), was successfully used as an attack vector, because only this accounts for how the browser is used, how fast patches reach the user, how good the sandbox is, etc. But this would actually be far more work than just uselessly counting CVEs, so probably nobody has done it. – Steffen Ullrich Feb 10 '16 at 09:20
  • @DeerHunter I would agree, which is probably why it has the most reported vulnerabilities, because they fix them and don't hide it – Richie Frame Feb 10 '16 at 10:47
    @SteffenUllrich I'm sure _somebody_ does it; there are plenty of security research firms out there that collect information like that. Good luck getting that report for free though. – Mike Ounsworth Feb 10 '16 at 17:49
  • Opera and Vivaldi are basically Chrome with a different user interface. They use the same rendering and javascript engine as Chrome. – Philipp Feb 10 '16 at 21:18
  • @DeerHunter - I am curious as to what you mean by `longer list of known vulns doesn't mean that the piece of software is more, less, or equally secure than another piece of software` given that a vulnerability is an indication of a possible exploit by a given threat or a set of threats. – Motivated Feb 11 '16 at 06:09
  • @Motivated - 1. Vuln severity and risk. 2. Time-to-fix. 3. Number of researchers looking for vulns, and as a corollary of (3) - 4. Number of vulnerabilities not yet found/published (potential/actual 0-days). **are all different** – Deer Hunter Feb 11 '16 at 07:04
  • @DeerHunter - So how does someone take these into consideration in evaluating the state of security of a given browser for example? – Motivated Feb 11 '16 at 07:08
  • @Motivated - holistically. There's no plug-yer-numbers weighted linear or non-linear formula. – Deer Hunter Feb 11 '16 at 07:12
  • @DeerHunter - Appreciate it being looked at holistically however where would one find all the relevant information to do so? – Motivated Feb 11 '16 at 07:13

1 Answer


The trouble is that, unfortunately, there really isn't any straightforward, objective, useful numeric measure of how "secure" one browser is compared to another. Or, more generally, how secure one piece of software is compared to another.

Want to compare the gross number of vulnerabilities reported & fixed during some period of time? Well, as the commenters pointed out, there are several reasons you might not want to use total vulnerabilities as a representative measure of the practical security of a browser. Which means that comparing that number for one browser to that number for another is even more problematic. (For example, comparing Chrome's number of reported/fixed vulnerabilities to any other browser's.)

Well then, what about the number of vulnerabilities successfully exploited in the wild over some length of time? That's perhaps a less-bad measure than total vulnerabilities in some ways, but it still has substantial problems that make it quite flawed as a basis for comparison. Browser A may have more exploited vulnerabilities than Browser B, but what if that's because Browser A garnered more attention than Browser B from the security researchers, criminals, and/or nation-states who create exploits? Which, in turn, could be due to any number of reasons not related to a browser's "inherent" security or lack thereof. (A real-world example: it's likely true that one reason IE has historically drawn more exploitation attention than Firefox is that it is more heavily used within corporate and governmental sectors, which, of course, is where the greatest number of systems that are most attractive to attackers generally are.)

Or you could look at, say, the number of in-the-wild zero-day exploits (i.e. vulnerabilities that became known & exploited before a vendor could issue a patch) that bedeviled a browser over a certain timeframe. But that measure has issues similar to those just discussed. And do you count exploits that allowed a breach of a browser's site-rendering component but didn't allow escape from that browser's sandbox?

Or you could look at...

But I think you're getting my point. There really isn't any one number that captures a browser's susceptibility to compromise well or allows good comparisons on that basis. Which is essentially what an effort to compare announced vulnerabilities is trying to get at.

So, does that mean we simply can't compare security among browsers at all? No, I certainly wouldn't go that far. There are points of comparison that, if more abstract than numerical measures, are valuable. For instance, IMHO the architectural & defense-in-depth attributes of some browsers give them significant security advantages over others. (Though we mustn't forget that how a browser is configured and used is perhaps an even more important factor than the software's security architecture.) But straight numbers-vs.-numbers comparisons can be quite misleading.
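To make that concrete, here is a toy sketch in Python (entirely hypothetical data and arbitrary weights, not a real methodology) showing how even a trivial weighted composite of the metrics discussed above (raw CVE count, median time-to-fix, in-the-wild exploitation) can invert the ranking that raw counts alone would suggest. The weights themselves are the value judgment; there is no objectively "right" setting.

```python
# Toy illustration (hypothetical data, arbitrary weights): even a simple
# composite "score" embeds value judgments that raw CVE counts hide.

# Made-up figures for two hypothetical browsers.
browsers = {
    "Browser A": {"cves": 120, "median_days_to_fix": 15, "exploited_in_wild": 2},
    "Browser B": {"cves": 40,  "median_days_to_fix": 90, "exploited_in_wild": 5},
}

# Arbitrary weights -- changing these can flip the ranking, which is the point.
WEIGHTS = {"cves": 0.2, "median_days_to_fix": 0.4, "exploited_in_wild": 0.4}

def normalize(values):
    """Scale a list of numbers to [0, 1] (higher = worse here)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

names = list(browsers)
scores = {name: 0.0 for name in names}
for metric, weight in WEIGHTS.items():
    for name, norm in zip(names, normalize([browsers[n][metric] for n in names])):
        scores[name] += weight * norm

for name in sorted(scores, key=scores.get):
    print(f"{name}: risk score {scores[name]:.2f}")
```

With these particular weights, Browser B ends up scoring worse despite having far fewer CVEs, because slow patching and in-the-wild exploitation dominate. Nudge the weights and the ranking flips, which is exactly why a single number can't settle the question.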

mostlyinformed
  • Thanks. I do agree with the point that numbers vs numbers can be quite misleading however they can be powerful in conveying messages especially where there is a demand to know the susceptibility to vulnerabilities. I imagine vulnerabilities are indications of possible exploitations. They may or may not happen however the threat exists. It's identifying the mitigations to minimize risks. So if we capture the vulnerabilities + the rate at which they are exploited + the rate at which they are resolved + the threats + etc, wouldn't that be a way to determine a level of security? – Motivated Feb 11 '16 at 06:07
  • Well, I definitely think you could aggregate those raw numbers together in new ways that would prove *more useful* in terms of helping us understand browser security more than those raw numbers do. I'm sure that you're right about that. But I think it's important to acknowledge that *more useful than the current stats* is all we can really ever hope to achieve in such efforts. Because there are so many abstract complexities and even value judgments involved when we talk about browser security that can't really be quantified precisely; you're never going to get down to **definitive** numbers. – mostlyinformed Feb 11 '16 at 09:34
  • Thanks. In your view, if you were asked to provide an opinion on the state of security of browsers in your organization, what approach would you take with the understanding that you also want to track the trends of the past, current to help understand the future (albeit it's a stretch). If given the choice of dumping data into machine learning systems, what would the path be? – Motivated Feb 11 '16 at 16:48