
Would it be possible to have a JavaScript observatory and stricter CSP (Content Security Policy), and implement them for the Tor Browser? A JavaScript observatory would work similarly to the EFF's SSL Observatory: it would observe JavaScript, check whether it is exploit or XSS code, and block it, instead of the all-or-nothing "allow everything from domain X" approach used by NoScript. NoScript's XSS filter is flawed, and NoScript offers no protection when a trusted website's server gets hacked.

Tor Browser only implements a very limited subset of Content Security Policy; it does not let the user block XSS and other malicious JavaScript with CSP rules such as script-src 'none'.
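For reference, a minimal sketch (not Tor Browser or Firefox code) of what a server-sent policy with script-src 'none' looks like; the point here is that the browser could be made to enforce an equivalent rule on the user's behalf, but the directive syntax would be the same:

```python
# Minimal sketch, using a plain Python HTTP server for illustration only:
# the server sends a CSP header that forbids all script execution on its pages.
from http.server import BaseHTTPRequestHandler, HTTPServer

class StrictCSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><p>No script may run on this page.</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # script-src 'none' blocks inline and external scripts alike.
        self.send_header("Content-Security-Policy",
                         "default-src 'self'; script-src 'none'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), StrictCSPHandler).serve_forever()
```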

Stricter sandboxing (for Windows) could prevent exploit code from accessing APIs that give access to the username, computer name, MAC address, hostname, open connections to arbitrary IP addresses, etc., as apparently done by the FBI: https://twitter.com/jonathanmayer/status/621100179345686528/photo/1?ref_src=twsrc%5Etfw

Could someone explain parts of the FBI's Firefox 0-day?


2 Answers


The SSL Observatory works by sending the SSL certificate to a third party to confirm that it is valid. Doing the same for JavaScript has two problems.

  1. The first priorities of Tor are anonymity and privacy. If the Tor Browser sent all JavaScript to a third party for validation, that third party would be able to build a detailed usage profile. Sure, there might be ways to work around this problem, but if anything it would increase the attack surface for deanonymizing users.
  2. You cannot automatically scan JavaScript for maliciousness; there are just too many ways to hide bad behavior. If it were that easy, there would be browser plugins doing it (in fact there are, but their detection rates are abysmal). All you could do is maintain a whitelist of known harmless JavaScript code (a minimal sketch of such a hash-based check follows below), but that would require a team of security specialists to check each script manually. It might be worth the effort for some heavily frequented mainstream websites, but those are usually not the kind of websites the average Tor user is interested in.
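For illustration, a minimal sketch of what such a hash-based whitelist check could look like; the script contents and allowlist are purely hypothetical, and as noted above the hard part is curating the list, not performing the check:

```python
# Sketch of the "whitelist of known harmless JavaScript" idea: compare the
# SHA-256 digest of a script body against a manually curated allowlist.
import hashlib

# Hypothetical allowlist: digests of script versions that were manually reviewed.
VETTED_SHA256 = {
    hashlib.sha256(b"console.log('known good build 1.2.3');").hexdigest(),
}

def is_vetted(script_bytes: bytes) -> bool:
    """Return True only if this exact script body was manually approved."""
    return hashlib.sha256(script_bytes).hexdigest() in VETTED_SHA256

# Any change to the script, benign or malicious, fails the check.
print(is_vetted(b"console.log('known good build 1.2.3');"))                  # True
print(is_vetted(b"console.log('known good build 1.2.3'); stealCookies();"))  # False
```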

Stricter sandboxing to prevent JavaScript from doing things it's not supposed to do is of course always a priority. None of the things you mentioned should be possible according to the JavaScript specifications, so if any of them is possible, it's a bug which should be fixed in mainline Firefox and not just in Tor Browser.

Philipp

Would it be possible to have a JavaScript observatory and stricter CSP (Content Security Policy)... similar to EFF's SSL Observatory

I think you first have to decide what kinds of attacks you want to protect against. Will the attacker only be able to replace the content of third-party sites included in the main site (e.g. jquery.org), or will they be able to modify the main site too?

If the attacker is only able to compromise third-party sites, a strict CSP and something like a JavaScript observatory might work, because you either forbid inclusion of third-party script entirely or restrict it to known-good script (see also hashes for script-src, which are implemented in Chrome 46).
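As a rough illustration of the hash mechanism (CSP Level 2 hashes for inline scripts, as shipped in Chrome 46), this is approximately how a site operator could derive the 'sha256-...' source token for a known-good script; the script content is just an example:

```python
# Sketch: derive a CSP script-src hash token for a known-good inline script.
import base64
import hashlib

def csp_script_hash(script_source: str) -> str:
    """Return a 'sha256-...' token usable in a script-src directive."""
    digest = hashlib.sha256(script_source.encode("utf-8")).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

token = csp_script_hash("alert('known good');")
print(f"Content-Security-Policy: script-src '{token}'")
```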

But if the attacker is also able to modify the content of the main site, then this approach will probably only help for the few sites which have no inline script, no user-dependent script, and where the script changes only rarely. In practice, JavaScript changes far more often than a certificate, and you often have cases where the JavaScript served to a user is tailored to that user, i.e. depends on the session, account, etc. This is true not only for inline script but also for script included with the script tag. You also have lots of small script fragments included with onXXX attributes (e.g. onclick="dothis();"), where not only the script code itself is relevant but also where this code is used in the HTML. Thus an observatory would not help for anything but a few mostly static sites.
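A small sketch of why user-tailored script defeats this kind of whitelisting (the session-token template is hypothetical): the same benign template yields a different digest for every user, so there is no single "known good" hash to pin.

```python
# Sketch: a script that embeds per-session data hashes differently for each
# user, so neither an observatory nor a static script-src hash can cover it.
import hashlib

TEMPLATE = "var sessionToken = '{token}'; initApp(sessionToken);"

for token in ("a1b2c3", "d4e5f6"):  # two different users' sessions
    script = TEMPLATE.format(token=token)
    print(token, hashlib.sha256(script.encode("utf-8")).hexdigest()[:16])
# The digests differ even though both scripts are equally benign.
```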

NoScript's XSS filter is flawed.

All XSS filters are limited, as are all antivirus solutions. It is impossible to reliably distinguish between good and bad with the limited information these filters have about what good and bad might mean for a specific site. Either they block too much (false positives) or they miss something (false negatives). In the first case the user will switch the filter off, and in the latter case it will not offer full protection.

Stricter sandboxing (for Windows) could prevent exploit code from accessing APIs that give access to the username, computer name, MAC address, hostname, open connections to arbitrary IP addresses, etc.

Yes, I would not consider the current security architecture of Firefox optimal, and I think Chrome provides better protection against exploits which try to break out of the browser. On the other hand, as far as I know Chrome does not provide the APIs needed for the full checks done by NoScript (i.e. XSS prevention, clickjacking detection, ...).

Another approach would be to run the OS with the browser itself inside some virtual machine, so that the attacker would have to break not only out of the browser but also out of the VM to gain the relevant information. Each level of separation makes the attack harder.

Could someone explain parts of the FBI's Firefox 0-day?

From the publicly available information it looks like they used JavaScript to trigger a bug in the browser and inject shell code, which could then do anything with the permissions of the user, including reading the hostname and MAC address and sending this information to the attacker's server. I cannot see from the public information which site they compromised, whether that site depended on script, whether it used inline script, etc. Thus it is unknown whether your approach with the observatory and CSP would have protected the user in this case.

Steffen Ullrich
  • "sites which have no inline-script, no user-dependent script and where the script changes only rarely" For inline scripts, CSP describes that "Individual inline scripts and stylesheets may be whitelisted via nonces (as described in §4.2.4 Valid Nonces) and hashes (as described in §4.2.5 Valid Hashes)." If there are no user-dependent scripts or when the trusted user dependent scripts can be distinguished from potentially injected malicious scripts with regular expressions (because they are different) you could protect or - when unsure - warn the user when a significant script change happens. – Dojan Sep 06 '15 at 13:40
  • "Another approach would be to run the OS with the browser itself inside some virtual machine so that the attacker would not only to break out of the browser but also out of the VM too to gain the relevant information." In security, the user is often the weakest link, so it would be better if stricter sandboxing would be built-in, so the protection is for everyone and not just those who use things like VMs, EMET and Tails. – Dojan Sep 06 '15 at 14:04
  • @Dojan: first, nonces are not widely implemented yet. Then, for user-specific script the nonces for one user are different from those for another user, so you cannot have some global observatory for these. And you cannot rely on the site itself, because the attacker might have compromised it and will change the nonces accordingly. Also, for scripts which change often, warnings to the user will only have the effect that the user disables the warnings - because what could he do to verify what is right? – Steffen Ullrich Sep 06 '15 at 15:30
  • One of the reasons nonces are not widely implemented yet is that they are relatively new and some browsers don't support them. CSP is primarily intended for attacks like XSS, where the attacker can inject code but can't edit the source code of the site and its loaded resources. Nonces are supposed to be the same for every script element on the webpage after the page has loaded, but different every single time the page loads. The idea is that if you can inject JavaScript using an XSS vulnerability but can't change the source code, your injected JavaScript won't execute because of the random nonce mismatch (see the sketch after this comment thread). – Dojan Sep 06 '15 at 17:09
  • Nonces are in the HTML; if you have some global observatory, it would be more useful to look at the JavaScript itself. It would indeed be more effective when scripts don't change often (in an unpredictable way). As for warnings, that is a classic security problem, see for example "[Google redesigns security warnings after 70% of Chrome users ignore them](https://nakedsecurity.sophos.com/2015/02/03/google-redesigns-security-warnings-after-70-of-chrome-users-ignore-them/)". But at least it won't go unnoticed; right now you might get exploited and everything looks fine, because the sites work as they always do. – Dojan Sep 06 '15 at 17:18
  • "It would indeed be more effective when scripts don't change often..." - That's what I tried to point out in my answer: your idea depends on the sites being less dynamic with the script they serve. – Steffen Ullrich Sep 06 '15 at 17:34
  • I think there are almost no sites out there that are rewritten from scratch every week or so; most sites stay more or less the same for months and years, with some updates once in a while. I don't think it will stop working when sites are more dynamic. The question is rather: when there is a change, how do you tell the difference between a good change and a malicious change? There are a lot of different ways to detect this based on heuristics, for example, for exploits, EMET, [A3](http://bit.ly/1ubdpUJ), etc. And what is the balance you want between user-friendliness and the level of protection? – Dojan Sep 06 '15 at 19:28
  • @Dojan: you are right that sites do not get written from scratch that often. But they get tweaked a lot, i.e. you have lots of small changes over time, and each of these changes might cause a warning. It would probably be possible for somebody who pwned the site to introduce some malicious code without notice. – Steffen Ullrich Sep 06 '15 at 19:38
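For completeness, a minimal sketch (not from the thread) of the nonce mechanism discussed in the comments above: a fresh random nonce is generated per response, sent in the CSP header, and repeated on each legitimate script tag, so injected script that does not carry the nonce is blocked.

```python
# Sketch of nonce-based CSP: the nonce changes on every page load and must
# appear both in the header and on each legitimate <script> element.
import secrets

def render_page():
    nonce = secrets.token_urlsafe(16)  # new value for every response
    header = f"Content-Security-Policy: script-src 'nonce-{nonce}'"
    html = f'<script nonce="{nonce}">initApp();</script>'
    return header, html

header, html = render_page()
print(header)
print(html)
```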