
I work for a university, where I am part of the team responsible for integrating a SaaS Learning Management System (e.g. Moodle, Canvas) with the rest of the university's systems.

Two months ago, I identified a CSRF attack, available to anyone who can add course materials to the system (mostly lecturers, but also other academic staff and tutors). The problem is that course materials can contain JavaScript, and any admin user viewing the course materials (e.g. in response to a support request) will run that JavaScript. As the system now sports a (publicly documented!) REST API, it is possible for said JavaScript to, among other things, elevate any user to System Administrator. Notably, the web site itself uses the REST API throughout, which means that JavaScript must be able to make API calls. I've developed functional exploit demos, so I know for sure that it works, and how easy it is to do.

I reported it to the vendor, and their immediate response was 'Lecturers are trusted users, this is functioning as designed.' To which my irate response was 'I don't trust lecturers to be system administrators'. According to a simple DB query, we have roughly 4,000 users in the system who could upload JavaScript materials, out of ~50,000 users total. Two months later, they are still sticking to their original response.

I'm a little worried about my own institution's system: I've implemented some external monitoring scripts to tell me if any unexpected admins appear, and we do have offline backups, which is about all I can think of to do. However, I'm also bothered that this exploit affects all such installations all over the world, and it's a commonly run system (probably >1,000 instances).

As this is SaaS, we don't have enough access to the system to be able to block it ourselves.

So I have a group of related questions:

  1. Is this attack acceptable, or at least common among large enterprise systems?
  2. I believe wrapping user-generated content in an iframe with the sandbox attribute would block this attack (provided admins use up-to-date browsers), by using CORS to prevent requests to the REST API. Is this safe? Is it otherwise possible to make user-uploaded JavaScript safe?
  3. If the vendor refuses to fix it, what are my next moves? Is public disclosure recommended, given that it is fairly easy to exploit once it is known about?

EDIT: To answer the comments:

I admit I'm a bit hazy on the difference between CSRF and XSS, despite reading articles on it. By this definition it's XSS in that it 'execute in some way a script', but it's CSRF in that it has to 'use a victim's already logged cookie/session'. Also, the token that is key to the exploit is called a CSRF token by the vendor.

Here's some more detail on how the exploit and the proposed mitigation work, per (2). Although you're right: this should really be a separate question.

The attack is to include a fetch request in the supplied JavaScript that makes a PATCH REST request against the appropriate API, in which case the victim's cookie is used as authentication for the API. Non-GET requests require a custom header (an 'xsrf' token), but it's possible for the supplied JavaScript to scrape that token from another page.

As for the proposed mitigation: course contents, including all user-supplied JavaScript, are currently wrapped in a single div. I propose replacing the div with an iframe that fetches the user contents separately (or uses a srcdoc), and has a sandbox attribute without the allow-same-origin flag. That would mean the script wouldn't have access to the authentication cookie, and wouldn't be able to make fetch requests to the API anyway because of CORS.
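To make the shape of the attack concrete, here is a minimal sketch. Every endpoint, header name, payload field, and the token's embedding in the page are invented for illustration; the real API and token placement differ, and I'm obviously not posting the working exploit.

```javascript
// HYPOTHETICAL sketch of the attack shape -- all names are invented.

// Step 1: scrape the anti-CSRF token from a page that embeds it
// (assuming here it appears as <meta name="xsrf-token" content="...">).
function scrapeXsrfToken(pageHtml) {
  const match = pageHtml.match(/name="xsrf-token"\s+content="([^"]+)"/);
  return match ? match[1] : null;
}

// Step 2: replay the scraped token in a PATCH request. The victim's session
// cookie is attached automatically, because the request is same-origin.
async function elevate(userId) {
  const page = await (await fetch('/dashboard')).text();
  const token = scrapeXsrfToken(page);
  await fetch(`/api/v1/users/${userId}`, {
    method: 'PATCH',
    headers: { 'X-XSRF-TOKEN': token, 'Content-Type': 'application/json' },
    body: JSON.stringify({ role: 'SystemAdministrator' }),
  });
}
```

The point is that the custom-header requirement only stops classic cross-site CSRF; injected same-origin script can read the token itself, so it provides no protection here.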

  • "_Lecturers are trusted users_" then why don't they **all** have root access? (lol) – curiousguy Nov 28 '18 at 01:13
  • I might’ve missed something in the post, but how is this CSRF? Sounds more like broken access control to me. – securityOrange Nov 28 '18 at 04:48
  • The description does not sound like a CSRF attack. It sounds more like a stored XSS, i.e. someone can put JavaScript in a page visited by others, which then gets executed as the logged-in user. And if elevating privileges can just be done with some JavaScript, it also sounds like broken access control. – Steffen Ullrich Nov 28 '18 at 04:49
  • Apart from that, I think your question is both too broad (too many different and only slightly related questions) and partly also a duplicate. But in short: 1. what you describe is neither typical nor acceptable; 2. too few details are known about how the attack really and exactly works to determine if this would mitigate it; 3. see [Disclosing a vulnerability when ignored by vendor](https://security.stackexchange.com/questions/198542/disclosing-a-vulnerability-when-ignored-by-vendor) and similar questions. – Steffen Ullrich Nov 28 '18 at 05:35
  • If you find yourself with a vulnerability where the vendor won't fix the issue, the best thing you can do is document your findings and run it up the chain of command. Either your school will deem it not important enough, or they may decide to stop using the software. Unfortunately there's a lot of poorly written software with security vulnerabilities; all you can do is make the decision makers aware and prepare for any potential fallout (it sounds like your backups and notification scripts are already in place). – Daisetsu Dec 03 '18 at 01:36

2 Answers


> Is this attack acceptable, or at least common among large enterprise systems?

No, this is absolutely not acceptable at all. And I do hope it is not common.

> I believe wrapping user-generated content in an iframe with the sandbox attribute would block this attack (provided admins use up-to-date browsers), by using CORS to prevent requests to the REST API. Is this safe?

In theory a completely sandboxed iframe would help. But as you point out, support for the sandbox attribute is not perfect. It only takes one administrator with an old browser for you to be owned.
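For illustration, such wrapping might look something like the sketch below. The helper name and the escaping are mine, not the vendor's code; this is only the general shape under the assumption that user content is rendered via `srcdoc`:

```javascript
// Build an iframe that renders untrusted HTML via srcdoc. The sandbox
// attribute deliberately omits allow-same-origin, so the content runs in an
// opaque origin: it cannot read the session cookie, and its fetch() calls
// back to the parent origin are cross-origin and unauthenticated.
function sandboxedFrameHtml(untrustedHtml) {
  // Escape the content so it can sit safely inside the srcdoc attribute.
  const escaped = untrustedHtml
    .replace(/&/g, '&amp;')
    .replace(/"/g, '&quot;');
  return `<iframe sandbox="allow-scripts" srcdoc="${escaped}"></iframe>`;
}
```

Note that `allow-scripts` is still granted, since the whole point is to let legitimate course JavaScript run; the isolation comes entirely from withholding `allow-same-origin`.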

> Is it otherwise possible to make user-uploaded javascript safe?

You could take a look at how sites like JSFiddle solve this problem. They echo the user-generated content back from a different domain, thereby leveraging the browser's same-origin policy (SOP) for protection.
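As an illustrative sketch of that approach (the sandbox domain and URL scheme here are invented placeholders):

```javascript
// Hypothetical: user-authored content is served from a dedicated, cookieless
// domain. The browser's same-origin policy then prevents any script embedded
// in that content from making authenticated calls to the main site's API.
function userContentFrame(contentId) {
  const base = 'https://usercontent.example-lms.net/render/';
  return `<iframe src="${base}${encodeURIComponent(contentId)}"></iframe>`;
}
```

This is how "render user HTML" sites generally contain hostile scripts: even without the sandbox attribute, the separate origin means no shared cookies and no credentialed API access.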

But if you are buying SaaS, it should not be your role to fix this.

> If the vendor refuses to fix it, what are my next moves? Is public disclosure recommended, given that it is fairly easy to exploit once it is known about?

In your role as a customer, I would recommend getting the lawyers on the case. If your contract is any good, the company delivering the system is in breach of it. If you are paying for a product with different permission levels, you should make them deliver one. Currently, they are not.

In your role as a researcher who found a security vulnerability, I would recommend responsible disclosure. Among other things, you should give the vendor a clear deadline before you disclose.

> I admit I'm a bit hazy on the difference between csrf and xss.

As others have pointed out, what you describe is stored XSS and not CSRF.

Anders
  • FYI, while they did eventually acknowledge it as a bug, they still (300 days later) haven't done anything more than a security advisory with a partial fix. It's a shame our contract is absolutely terrible (though apparently quite typical for LMSs). I raised it with my institution's head of IT security though, and he's pushing it through [CAUDIT](https://www.caudit.edu.au/), which is probably the most appropriate responsible disclosure avenue. I'm also continuing to agitate, for what little good it's doing me. It's certainly been an eye-opening experience. Thanks for the advice. – Amanda Ellaway Aug 13 '19 at 06:32

Similar discussions about the ethical disclosure of security vulnerabilities have taken place here (My old job has massive security exploits in their product, but they don't care) and here (How to disclose a security vulnerability in an ethical fashion?).

> Lecturers are trusted users, this is functioning as designed.

Did they fully understand your attack? I suggest trying to convince them of the importance of the exploit you found. If they continue to ignore you, you should publicly disclose it as described in the links above.

jimouris