
There are several articles about XSS vulnerabilities in Android/iOS WebViews. By WebView I mean the 'real' WebView, not SFSafariViewController or Chrome Custom Tabs.

I understand the main concept of XSS. An example of client XSS would be redirecting someone to a URL with a manipulated query string in it. But how would someone start an XSS attack on a WebView?

I can think of one example myself: the app registers a deep link/universal link. When such a link is opened, the app launches and an intent loads the requested page. If a user then clicks a universal link like https://example.com/openpage/bar?query=<script>alert('XSS');</script> and the developer was really lazy, this could result in XSS. But this is pretty easy to counter, and thus nothing to get too scared about.
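In code, the fix is simply to HTML-escape the untrusted query parameter before it ever reaches markup. A minimal sketch (the class and method names are illustrative, not from any real app):

```java
// Minimal HTML escaping for an untrusted deep-link parameter before it is
// interpolated into a page rendered by a WebView. Covers the five
// characters that matter in HTML text and attribute contexts.
public class DeepLinkSanitizer {
    public static String escapeHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this, the payload from the URL above renders as inert text (`&lt;script&gt;...`) instead of executing.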

Of course server XSS attacks are also possible, but I mean client XSS.

Can you think of any other ways to exploit XSS? Because if the one I mentioned is the only one, I think the risk of an XSS attack on a WebView is minimal.

Tafel
  • Have a look at the [tweetdeck worm](https://www.google.com/search?q=tweetdeck%20worm). – Anders Jun 24 '19 at 08:32
  • You seem to think about WebViews as static html rendering engines. Indeed, they may contain (depending on the app, cordova-apps are completely HTML5, for example) a full blown mobile web app with user-supplied input. – Tobi Nary Jun 24 '19 at 12:30
  • Thanks @Anders. That's actually interesting. But that is more a Server XSS attack. I am specifically looking for Client XSS vulnerabilities in WebViews. For example in a 'traditional' browser Eve could trick Bob into writing malicious code in the console. I don't think a similar attack is possible on a WebView. But I wanted to explore if anyone knows of similar attacks. – Tafel Jun 24 '19 at 14:59
  • It's stored XSS. Could still be client though. Writing code in console is self XSS. That is not an issue with a WebView. But self XSS is a very small subset of all XSS. – Anders Jun 24 '19 at 17:53

1 Answer


Let's clear something up first:

I understand the main concept of XSS; on a regular website it could be achieved by for example redirecting someone to a URL with a manipulated query string in it.

No, XSS is when an attacker can inject code into a page's client-side code, where it is then executed. The injected code can show an overlay (a login screen for phishing purposes), perform requests on behalf of the user (call API endpoints to perform XYZ actions), redirect the user to a malicious website, etc.

The delivery point can vary. Using the query string in a URL is reflected XSS. If you're able to deliver an XSS payload via, say, a comment on a forum, that's stored/persistent XSS. Check out the OWASP XSS page for more information.

XSS is usually explained in the context of a browser. Now, WebViews are essentially "embedded browsers" inside a mobile application, so the above attack scenarios apply to WebViews as well. There are a few variations though!

First, in the context of Android and iOS, there are two kinds of WebView: those that essentially launch a separate browser (SFSafariViewController on iOS, Chrome Custom Tabs on Android) and traditional WebViews. As far as I understand, the former is more isolated (process/permission-wise) than the latter. Accessing "app data" from those non-traditional WebViews is therefore harder, since it requires sandbox-escape exploits. For reference, each app on Android/iOS has its own user/group ID, and each app's data can only be accessed by the app itself (a sandbox restriction).

Second, WebViews can be hardened by disabling JavaScript entirely, and most interesting XSS payloads are delivered via JS. HTML/CSS payloads can still redirect the user or change the layout of the page though!
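On Android, for instance, JavaScript is off by default, and a WebView can be locked down further along these lines. This is a hedged hardening sketch against the standard `android.webkit.WebSettings` API, not a complete policy for any particular app:

```java
import android.webkit.WebSettings;
import android.webkit.WebView;

// Hardening sketch for an Android WebView. JavaScript is already disabled
// by default; the file-access settings close off file://-based vectors.
final class WebViewHardening {
    static void harden(WebView webView) {
        WebSettings settings = webView.getSettings();
        settings.setJavaScriptEnabled(false);        // keep JS off unless truly needed
        settings.setAllowFileAccess(false);          // no file:// URLs at all
        settings.setAllowFileAccessFromFileURLs(false);
        settings.setAllowUniversalAccessFromFileURLs(false);
    }
}
```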

Third, since WebViews are part of the app, they can access the app's own local data via the file:// scheme. The problem? Browsers, and therefore WebViews, disallow cross-origin requests: a page served from example.com cannot make requests to www.evilwebsite.com or to file:// unless the appropriate CORS headers are set. There's one edge case though: sometimes mobile apps download a webpage via a custom HTTP client, save it locally, and then open the saved copy in a WebView. Now there's no cross-origin barrier, and the page can access app data. This is, however, one of those cases where the moon and stars need to align:

  1. Successfully deliver a XSS payload
  2. Vulnerable webpage is used by the mobile app
  3. The mobile app programmatically downloads the page, saves it and displays it in a WebView (cross origin ✅)
  4. Mobile app WebView is configured to allow JavaScript (in Android for example, you need to explicitly enable JS)
  5. Interesting (unencrypted) data needs to be available in app data folder (such as shared preferences file) in order to make this attack viable in the first place.
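The anti-pattern in steps 3–4 might look roughly like this on Android. This is a hypothetical sketch for illustration only; the class, file name, and view ID are made up, and the HTTP download itself is elided:

```java
import java.io.File;

import android.content.Context;
import android.webkit.WebView;

// Hypothetical anti-pattern: the app downloads a page with its own HTTP
// client, caches it on disk, then renders the cached copy from a file://
// URL. A stored XSS payload in that page now runs in the file:// origin,
// where cross-origin rules no longer shield the app's sandboxed files
// if file access is enabled on the WebView.
final class CachedPageLoader {
    static void showCachedPage(Context context, WebView webView) {
        File cached = new File(context.getFilesDir(), "cached_page.html");
        // ...HTTP response body previously written to `cached`...
        webView.getSettings().setJavaScriptEnabled(true);      // step 4
        webView.loadUrl("file://" + cached.getAbsolutePath()); // step 3
    }
}
```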
Anders
HamZa
  • Thanks for your help. I should have been clearer about the 'I understand the main concept of XSS' part: I meant that as an example of client XSS. I know several other types are possible. I now understand that, for a WebView, server XSS is possible just like on any other webpage. What I still don't understand is client XSS in WebViews. Other than the example I gave and the one you gave, I don't really see a lot of potential there. So does that mean that (from a client XSS perspective) WebViews are actually quite well defended against XSS? – Tafel Jun 24 '19 at 14:45
  • Also I meant the full WebView kind, not the Custom Tabs or SFSafariViewController. Sorry about that. – Tafel Jun 24 '19 at 14:45
  • @Tafel there's not much "additional" potential in WebViews beyond what you can already do in browsers. The only extra attack vector I see is the exfiltration of local app data in the edge case I described above. One nice bonus is that developers can disable JS, while in a browser the client has to disable it. This doesn't stop HTML/CSS injection, though. Not to mention that developers need to be aware of this and disable JS whenever it's not needed. – HamZa Jun 24 '19 at 16:08
  • The reason I asked this question is that I have a WebView in which I have to store a secret (for OAuth 2). I thought that if I only have to mitigate server XSS (provided the edge case you described and the one I described are mitigated), it might be possible to store the secret in a cookie. It is obviously not completely safe there, but I think a risk analysis would justify doing it like this. It is maybe a bit out of scope for this question, but what do you think? – Tafel Jun 26 '19 at 13:38
  • @Tafel I need more information in order to wrap my head around this. OAuth2 has several different flows, not to mention one can mess up the implementation. IIRC PKCE with a dynamic secret is usually used for native mobile apps. See: https://security.stackexchange.com/questions/175465/what-is-pkce-actually-protecting – HamZa Jun 26 '19 at 16:56