I work for a university, where I am part of the team responsible for integrating a SaaS Learning Management System (e.g. Moodle, Canvas) with the rest of the university's systems.
Two months ago, I identified a stored XSS attack, exploitable by anyone who can add course materials to the system (mostly lecturers, but also other academic staff and tutors). The problem is that course materials can contain JavaScript, and any admin user viewing the course materials (e.g. in response to a support request) will run that JavaScript. As the system now sports a (publicly documented!) REST API, it is possible for said JavaScript to, among other things, elevate any user to System Administrator. Notably, the web site itself uses the REST API throughout, which means in-page JavaScript must be able to make API calls. I've developed functional exploit demos, so I know for sure that it works, and how easy it is to do.
I reported it to the vendor, and their immediate response was 'Lecturers are trusted users; this is functioning as designed.' To which my irate response was 'I don't trust lecturers to be system administrators.' According to a simple DB query, roughly 4,000 of our ~50,000 users could upload JavaScript materials. Two months later, they are still sticking to their original response.
I'm a little worried about my own institution's system: I've implemented some external monitoring scripts to tell me if any unexpected admins appear, and we do have offline backups, which is about all I can think of to do. However, I'm also bothered that this exploit affects every such installation in the world, and it's a commonly-run system (probably >1,000 instances).
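For what it's worth, the core of my monitoring script is just a diff of the current admin list against a known-good snapshot. A minimal sketch (the function name and the API endpoint in the comment are my own; the vendor's actual API shape will differ):

```javascript
// Return any users in `current` that are not in the known-good snapshot.
// `knownGood` and `current` are arrays of user identifiers.
function unexpectedAdmins(knownGood, current) {
  const expected = new Set(knownGood);
  return current.filter((user) => !expected.has(user));
}

// The polling side would do something like (endpoint name assumed):
//   GET /api/users?role=SystemAdministrator
// and alert whenever unexpectedAdmins(snapshot, latest).length > 0.
```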
As this is SaaS, we don't have enough access to the system to be able to block it ourselves.
So I have a group of related questions:
- Is this class of attack considered acceptable, or at least common, among large enterprise systems?
- I believe wrapping user-generated content in an `iframe` with the `sandbox` attribute would block this attack (provided admins use up-to-date browsers): the sandboxed frame gets an opaque origin, so the victim's session cookie isn't available and CORS blocks requests to the REST API. Is this safe? Is it otherwise possible to make user-uploaded JavaScript safe?
- If the vendor refuses to fix it, what are my next moves? Is public disclosure recommended, given that it is fairly easy to exploit once it is known about?
EDIT: To answer the comments:
I admit I'm a bit hazy on the difference between CSRF and XSS, despite reading articles on it. By this definition it's XSS, in that it 'executes in some way a script', but it's CSRF-like in that it 'uses a victim's already logged-in cookie/session'. Also, the vendor calls the token that is key to the exploit a 'CSRF token'.
Here's some more detail on how the exploit and mitigation work, per question (2). Although you're right: this should really be a separate question.
The attack is to include a `fetch` request in the supplied JavaScript that makes a PATCH request against the appropriate REST API endpoint; the victim's cookie is then used as authentication for the API. Non-GET requests require a custom header (an 'xsrf' token), but the supplied JavaScript can scrape that token from another page.
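To make the shape of the payload concrete, here's a deliberately generic sketch. The endpoint, header name, and field names are all invented for illustration; the real API differs. The point is only that the browser attaches the victim's session cookie automatically, and the 'xsrf' header can be scraped from a page the script can already read:

```javascript
// Build the options object for the privilege-escalation PATCH.
// Endpoint, header name, and body fields are hypothetical.
function buildElevationRequest(xsrfToken, userId) {
  return {
    method: 'PATCH',
    credentials: 'include',        // the victim admin's session cookie rides along
    headers: {
      'Content-Type': 'application/json',
      'X-XSRF-Token': xsrfToken,   // scraped token; actual header name varies
    },
    body: JSON.stringify({ id: userId, role: 'SystemAdministrator' }),
  };
}

// In the payload, something like:
//   const token = /* scraped from another same-origin page */;
//   fetch('/api/users/' + attackerId, buildElevationRequest(token, attackerId));
```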
As for the proposed mitigation: course contents, including all user-supplied JavaScript, are currently wrapped in a single `div`. I propose replacing the `div` with an `iframe` that fetches the user content separately (or uses `srcdoc`), and has a `sandbox` attribute without the `allow-same-origin` flag. That would mean the script wouldn't have access to the authentication cookie, and wouldn't be able to make `fetch` requests to the API anyway because of CORS.
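A minimal sketch of what I mean, as a string-building helper (function names are mine, and in practice the vendor would do this server-side when rendering the page). `allow-scripts` lets course JavaScript still run, while omitting `allow-same-origin` gives the frame an opaque origin: no cookies, no parent DOM access, and API fetches blocked by CORS:

```javascript
// srcdoc is an HTML attribute value, so ampersands and quotes must be escaped.
function escapeSrcdoc(html) {
  return html.replace(/&/g, '&amp;').replace(/"/g, '&quot;');
}

// Wrap untrusted course content in a sandboxed iframe. Omitting the
// 'allow-same-origin' token is what makes the frame's origin opaque.
function wrapInSandbox(userHtml) {
  return '<iframe sandbox="allow-scripts" srcdoc="' +
         escapeSrcdoc(userHtml) + '"></iframe>';
}
```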