16

I'm working on an ecommerce website written in C#/.NET (no CMS, quite a lot of code) where security hasn't been a priority for a long time. My mission right now is to find and fix any XSS vulnerabilities. A lot of unfiltered data is written directly into the rendered HTML.

What is my best strategy to cure the code without having to read every single page?

Anders
kyori
  • 4
    Very difficult to give specific advice without knowing more about the application. You could start with Burp Suite or OWASP ZAP to conduct some automatic scans. It may be worth manually checking key areas of access, such as login, registration etc. It may also be worth checking through server logs to see which areas of the site are commonly used, and starting to mitigate those entry points. If it's a large application it will take some serious time. Good luck – ISMSDEV Oct 16 '17 at 08:56
  • Yes, I understand that my question isn't very precise. I just wanted to know if there is some well-known "miracle solution" which I wasn't aware of. If there is no such method, I will indeed start looking manually or semi-manually through the application. Thanks for the answer. – kyori Oct 16 '17 at 09:16
  • 5
    A solution without touching the source code would be a [web application firewall](https://en.wikipedia.org/wiki/Web_application_firewall). – xehpuk Oct 16 '17 at 17:51

2 Answers

38

I propose the following four-step program, where you first pick the low-hanging fruit to get some minimum of protection in place while you work on the bigger problems.

1. Activate client side filtering

1.1 Set the X-XSS-Protection header

Setting the following HTTP response header will turn on the browser's built-in XSS protection:

X-XSS-Protection: 1; mode=block

This is by no means waterproof, and it only helps against reflected XSS, but it's something. Some old versions of IE (surprise, surprise) have a buggy filter that actually might make things worse, so you might want to filter out some user agents.
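In ASP.NET on IIS 7+, one way to emit the header on every response is through the custom headers section of `web.config` (a sketch; adjust to your hosting setup):

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Turn on the browser's reflected-XSS filter; block the whole
           page rather than trying to sanitize it when an attack is seen -->
      <add name="X-XSS-Protection" value="1; mode=block" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```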

1.2 Set a content security policy

If you do not use inline JavaScript in your app, a CSP can help a lot. Setting script-src 'self' will (a) limit script tags to only include scripts from your own domain, and (b) disable inline scripts. So even if an attacker manages to inject <img src=x onerror="alert('XSS')">, the browser will not execute the script. You will have to tailor the value of the header to your own needs, but the linked MDN resource should help you with that.
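For an ASP.NET site the policy can likewise be set in `web.config` (a sketch; the policy value itself must be tailored to your app):

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Only execute scripts loaded from our own origin;
           this also disables inline <script> blocks and on*= handlers -->
      <add name="Content-Security-Policy" value="script-src 'self'" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```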

But again, this is not waterproof. It does nothing to help users on browsers that don't implement CSP (see here). And if your source is littered with inline scripts, you will have to choose between cleaning that up and forgoing CSP.

2. Activate server side filtering

John Wu has a good suggestion in comments:

Also, since this is .NET, a very quick and easy change can turn on ASP.NET Request Validation which can catch a variety of XSS attacks (but not 100% of them).
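Request validation is on by default in ASP.NET, so the main thing to check is that it hasn't been switched off somewhere. The relevant `web.config` switches look roughly like this (a sketch):

```xml
<system.web>
  <!-- Validate all ASP.NET requests (on by default; shown here so you
       can verify it hasn't been disabled) -->
  <pages validateRequest="true" />
  <!-- On ASP.NET 4+, the default mode "4.0" validates every request;
       "2.0" would weaken this to page-level opt-in -->
  <httpRuntime requestValidationMode="4.0" />
</system.web>
```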

If you are working in another language, you might instead consider using a web application firewall (as suggested by xehpuk). How easy a WAF is to configure depends on what application you are protecting. If you are doing things that make filtering inherently hard (e.g. passing HTML in GET or POST parameters) it might not be worth the effort to configure one.

But again, while a WAF might help, it is still not waterproof.

3. Scan and fix

Use an automated XSS scanner to find existing vulnerabilities and fix them. As a complement you can run your own manual tests. This will help you focus your precious time on fixing easy-to-find vulnerabilities, giving you the most bang for the buck in the early phase.

But for the third time, this is not waterproof. No matter how much you scan and test, you will miss something. So, unfortunately, there is a point #4 to this list...
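As a crude complement to a real scanner, you can probe endpoints yourself: send a unique marker string in each parameter and check whether the response reflects it back unescaped. A minimal sketch (the probe value is made up, and the HTTP plumbing is left out):

```csharp
using System;

static class ReflectionCheck
{
    // Hypothetical probe value: unique enough to grep for, and containing
    // characters (< and ') that any correct HTML-encoder would escape.
    public const string Probe = "<'xss-probe-7f3a'>";

    // True if the response body echoes the probe back verbatim, i.e. the
    // page reflected our input without HTML-encoding it.
    public static bool IsReflectedUnescaped(string responseBody) =>
        responseBody != null && responseBody.Contains(Probe);
}
```

Send `Probe` as a query or form parameter to each endpoint, run the response body through `IsReflectedUnescaped`, and treat every hit as a candidate reflected-XSS sink to fix and re-test.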

4. Clean up your source code

Yes, you will "have to read every single page". Go through the source and rewrite all code that outputs data to use some kind of framework or template library that handles XSS issues in a sane way. (You should probably pick a framework and start using it already for the fixes you do under #3.)
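The core of those rewrites is encoding untrusted data at the point of output. A minimal before/after sketch using the framework's built-in encoder (the method names are illustrative):

```csharp
using System.Net;

static class Render
{
    // Vulnerable: untrusted data concatenated straight into markup.
    public static string Unsafe(string userName) =>
        "<p>Hello, " + userName + "</p>";

    // Fixed: HTML-encode untrusted data at the point of output.
    public static string Safe(string userName) =>
        "<p>Hello, " + WebUtility.HtmlEncode(userName) + "</p>";
}
```

With `Safe`, a name like `<script>alert(1)</script>` renders as inert text instead of executing; a template engine does this for you on every expression so nobody has to remember it.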

This will take a lot of time, and it will be a pain in the a**, but it needs to be done. Look at it from the bright side - you have the opportunity to do some additional refactoring while you are at it. In the end you will not only have solved your security problem - you will have a better code base as well.

Anders
  • 2
    Thanks a lot. Didn't know about #1 and #2! Unfortunately we can't apply #2 because we have inline JS, but that's good to know. I will fix big/obvious holes first and fix the others over the year as I come across them. – kyori Oct 16 '17 at 12:49
  • 2
    For #4 I would do a global search of the variables that are not filtered. For example, the name field, do a global search for `.name` in all the templates or controllers. For #3 you can use Tinfoil security scan. For #2 I would extract all inline JS into `.js` files, set a unique class name, id, or data attribute on the element, and add a handler or event which executes the function. This [guide](http://guides.rubyonrails.org/v5.0/working_with_javascript_in_rails.html#unobtrusive-javascript) will explain - that section is not Rails specific. – Chloe Oct 16 '17 at 15:46
  • 1
    @kyori you can whitelist inline JS via the CSP. – TrickyDupes Oct 16 '17 at 17:55
  • @Trickycm Yes, but that sort of defeats the whole purpose. (Not completely, but almost.) – Anders Oct 16 '17 at 18:06
  • @Anders, better with a little CSP help than none at all ;-) – TrickyDupes Oct 16 '17 at 18:07
  • You can put randomly-generated (on every page load) nonce values on the inline JS and set the CSP to only allow inline JS that has that nonce value. – Macil Oct 16 '17 at 20:21
  • 2
    Good answer. Also, since this is .NET, a very quick and easy change can turn on [ASP.NET Request Validation](https://www.owasp.org/index.php/ASP.NET_Request_Validation) which can catch a variety of XSS attacks (but not 100% of them). – John Wu Oct 16 '17 at 23:37
  • May be worth a mention now that the XSS Auditor is now on its way out! (Didn't last very long) – Conor Mancone Nov 12 '19 at 11:20
5

The short of it is that there is no easy solution. I have a suggestion for an "easy" solution at the bottom, but bear in mind that it has many caveats, which I will discuss here. First though, let's start from the big picture and work our way down.

In my experience (having worked with many legacy systems) "security hasn't been a priority for a long time" means that you likely have any number of security issues hiding in your system. XSS is just one issue, I'm sure. So unless you know someone is already on top of these I would concern myself with:

  1. Password security. I doubt you are hashing according to modern security standards, and this is a critical problem which is otherwise easily overlooked.
  2. Credit card security. I hope you are PCI compliant and aren't storing credit cards on site. I've seen plenty of legacy systems that store credit cards even though you aren't supposed to.
  3. SQLi is probably a real problem, and is especially dangerous if you store passwords insecurely or credit cards in your database.
  4. XSS vulnerabilities!

The numbers aren't meant to imply priority: they are all top priorities.
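On the password point specifically, here is a sketch of PBKDF2 hashing using the framework's built-in Rfc2898DeriveBytes (the iteration count and sizes are illustrative, and the constant-time comparison helper requires .NET Core 2.1+):

```csharp
using System;
using System.Security.Cryptography;

static class PasswordHasher
{
    const int Iterations = 100_000; // illustrative; tune for your hardware
    const int SaltSize = 16;        // bytes of random salt per password
    const int HashSize = 32;        // bytes of derived key to store

    // Returns a fresh random salt and the PBKDF2 hash; store both.
    public static (byte[] Salt, byte[] Hash) Hash(string password)
    {
        var salt = new byte[SaltSize];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);
        return (salt, Derive(password, salt));
    }

    // Re-derives the hash and compares in constant time to avoid
    // leaking information through timing differences.
    public static bool Verify(string password, byte[] salt, byte[] expected)
    {
        var actual = Derive(password, salt);
        return CryptographicOperations.FixedTimeEquals(actual, expected);
    }

    static byte[] Derive(string password, byte[] salt)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256))
            return kdf.GetBytes(HashSize);
    }
}
```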

The starting point

The most important thing is to fix this "institutionally". This is going to be the hardest to do, but is also the most critical. If you spend a few weeks fixing up all your XSS vulnerabilities, but security continues to be a bottom-tier priority, the problem is just going to come back the next time a developer outputs data unfiltered to the browser.

The best protection against XSS vulnerabilities is having developers who take security seriously and using a templating engine that properly handles XSS escaping for you. The key thing to remember is that with XSS you have to filter on output, not input. It's easy to see this as a one-way problem: "clean the user data when it comes in, and then you're good". But that doesn't protect against all attack vectors, especially XSS added via SQLi. In general, if XSS protection is something your developers have to remember to do every time, it will end up being forgotten. That's why your best bet is to have XSS protection built into your system. This is where a templating engine comes in: any competent templating system applies XSS filtering by default, and must be told explicitly when not to filter.
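Razor is an example of this behavior: expression output is HTML-encoded by default, and emitting raw markup requires an explicit opt-out (the property names below are illustrative):

```cshtml
@* Encoded by default: a name containing <script> renders as &lt;script&gt; *@
<p>Hello, @Model.UserName</p>

@* Explicit opt-out -- only for values you know are safe HTML *@
<div>@Html.Raw(Model.TrustedHtmlFragment)</div>
```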

Refactoring your system to use a templating engine specifically to take care of XSS vulnerabilities is probably not going to happen, but it is important to understand that if you don't do something to fix the institutional problem that allowed this to happen in the first place, the problem will just come back, and the weeks it takes you to fix it now will be wasted.

First practical steps

@Anders has some great starting points in his answer. A CSP and the X-XSS-Protection header both work the same way: by telling the browser to enable XSS protection client-side. Keep in mind (as @Anders mentioned) that these are browser-dependent and, especially for older browsers, may not be supported at all. In particular, IE's support for CSP is very minimal, even all the way up to IE11 (https://stackoverflow.com/questions/42937146/content-security-policy-does-not-work-in-internet-explorer-11).

The result is that while these steps are good starting points, you definitely cannot rely on them as your primary security: you still have to fix the problem on your end. Getting a good automated scanning tool is definitely the best way to get started. It will get you some immediate action items.

A partial solution

Another option you may have is to put XSS filtering across the board on your application. I don't normally recommend this, but I think the best bet for you is a multi-tiered response. The idea here is that you add some code to your application's bootstrapping process that checks all data incoming from the client (URL data, POST data, cookies, request headers, etc.). You then perform some filtering to detect common XSS payloads, and if any is found, reject the request altogether.

The problem with blacklist filtering is that it can be very unreliable. If you read the OWASP XSS filter evasion cheat sheet you'll get a good idea of how difficult it can be to reliably filter out XSS payloads. However, it is a quick way to get some protection up on every request, so it may be worthwhile in your case. One important issue to keep in mind is that this will generally stop WYSIWYG editors from working. That may or may not be a problem for you.
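To make the caveat concrete, the kind of check described above might look like the sketch below. The patterns and names are illustrative, deliberately incomplete, and exactly the sort of blacklist the evasion cheat sheet shows how to bypass:

```csharp
using System;
using System.Text.RegularExpressions;

static class CrudeXssFilter
{
    // Deliberately incomplete blacklist of common payload shapes:
    // <script tags, javascript: URLs, and inline on*= event handlers.
    static readonly Regex Suspicious = new Regex(
        @"<\s*script|javascript\s*:|on\w+\s*=",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    // True if an incoming value (query string, form field, cookie,
    // header) looks like a common XSS payload and the request
    // should be rejected.
    public static bool LooksSuspicious(string value) =>
        !string.IsNullOrEmpty(value) && Suspicious.IsMatch(value);
}
```

In ASP.NET this check would typically be wired into an `IHttpModule` or global filter that walks every incoming value and returns an error response on a match; note it will also reject legitimate HTML from any WYSIWYG editor.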

Conor Mancone
  • 29,899
  • 13
  • 91
  • 96
  • Thanks A LOT, I didn't expect such a high-quality answer. I do understand, and I like the idea of setting up mechanisms that take care of XSS in the long term (filtering inputs and using template-level protection). Unfortunately I'm still a junior developer and it's a bit too much for my skills right now. For now I'll talk to my manager and try to convince him to put some serious time into this, and if he doesn't care I'll just do the minimum and hope. Thanks again – kyori Oct 16 '17 at 13:37
  • @kyori In that case there is one more thing you should consider: don't stay in this job for too long. I've worked with a number of businesses that (I suspect) are just like this. They have legacy software they use and sell, and which makes them money, but good coding practices left a long time ago. The future for such companies is rarely bright, but most importantly, you will pick up a lot of bad habits. Learn what you can, but what you will mainly learn is why doing things the wrong way sucks. For your next job, look for a company that does things better. Then you can really start growing – Conor Mancone Oct 16 '17 at 13:53