1

Like what GitHub.com has just experienced: what if a script provider, such as Google Ads, Google Analytics, or something similar, appends malicious code to the end of its script and makes end users flood the target? What if the attacker prevents the target's own scripts from executing? What if the attacker makes the script request random paths on the target domain?

How can we defend against such an attack?

simonmysun
  • 113
  • 5
  • The critical point is that the attack sources are the end users, whom we cannot ban. – simonmysun Mar 30 '15 at 02:55
  • In this attack, the GFW operated in reverse, adding a malicious JavaScript to the Baidu web site, but only for visitors outside China. Most small web sites have no chance to survive an attack like this. – Michael Hampton Mar 30 '15 at 17:43
  • Actually, if the GFW changed the code for visitors inside China, the enormous traffic would also trigger the GFW itself (the target page contains keywords under censorship). And I guess it's not that easy to perform such man-in-the-middle attacks inside China. I just found this kind of attack is so low-cost. In China there are so many cross-site script providers and CDN providers. It would be terrible if one of them got hijacked. – simonmysun Apr 01 '15 at 04:50
  • China is in bad shape anyway because so many people are still using Windows XP. – Michael Hampton Apr 01 '15 at 05:06

1 Answer

0

It's not easy. Basically you want to block the traffic as early as possible.
That is normally your firewalls or even earlier your ISP.
With such an attack, that is usually not flexible enough, as you can only block IP networks.

You can route your entire traffic to a vendor that specializes in DDoS mitigation.
They have huge bandwidth resources.
This obviously needs to be well prepared.
For some of those providers you need to be able to change BGP routes.
Those are called scrubbing centers or clean pipes providers.
They take all your traffic and filter out bad traffic, sending only "clean" traffic to your servers.

There it's a matter of creating rules to catch the attack traffic.
In the easiest case it's always the same URL that gets requested.
=> Drop repeated traffic to that single URL.
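A minimal sketch of that first rule, as an in-memory sliding-window rate limiter (the class name, limits, and window size are illustrative assumptions, not part of any real product):

```python
import time
from collections import defaultdict, deque

# Hypothetical rate limiter: drop requests once a single URL is hit
# more than `limit` times within `window` seconds.
class UrlRateLimiter:
    def __init__(self, limit=1000, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # url -> timestamps of recent hits

    def allow(self, url, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[url]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: drop as attack traffic
        q.append(now)
        return True

limiter = UrlRateLimiter(limit=3, window=60.0)
print([limiter.allow("/attacked", now=float(i)) for i in range(5)])
# [True, True, True, False, False]
```

In practice you would do this at the edge (e.g. a web server's built-in request-rate limiting) rather than in application code, but the counting logic is the same.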

This gets more complicated when the URLs are random.
=> If an IP hits pages that return a 404 more than X times in Y minutes, block it.
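The 404-counting rule could look like this sketch (thresholds and names are assumptions for illustration):

```python
import time
from collections import defaultdict, deque

# Hypothetical blocker: ban an IP once it triggers more than `max_404`
# 404 responses within `window` seconds.
class NotFoundBlocker:
    def __init__(self, max_404=20, window=300.0):
        self.max_404 = max_404
        self.window = window
        self.misses = defaultdict(deque)  # ip -> timestamps of 404s
        self.blocked = set()

    def record(self, ip, status, now=None):
        now = time.monotonic() if now is None else now
        if status != 404 or ip in self.blocked:
            return
        q = self.misses[ip]
        # Drop 404s older than the window before counting.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        if len(q) > self.max_404:
            self.blocked.add(ip)

    def is_blocked(self, ip):
        return ip in self.blocked

blocker = NotFoundBlocker(max_404=3, window=300.0)
for i in range(5):
    blocker.record("1.2.3.4", 404, now=float(i))
print(blocker.is_blocked("1.2.3.4"))
# True
```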

And even more complicated when the URLs are random but existing.
Then you need to detect anomalies in traffic.
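One simple form of anomaly detection is to compare the current request rate against a recent baseline and flag large deviations. A sketch, assuming a z-score-style threshold (real scrubbing providers use far more sophisticated models):

```python
import statistics

# Hypothetical anomaly check: flag the current request rate if it deviates
# from the recent baseline by more than `k` standard deviations.
def is_anomalous(history, current, k=3.0):
    if len(history) < 2:
        return False  # not enough data for a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) > k * stdev

baseline = [100, 110, 95, 105, 102, 98, 101, 104]  # requests/second on normal days
print(is_anomalous(baseline, 108))   # False: within normal variation
print(is_anomalous(baseline, 5000))  # True: traffic spike
```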

You will block some of your real users if they are visiting infected pages that are used to DDoS you.
That's the price to pay to keep your services available to the rest of your users.

faker
  • 17,326
  • 2
  • 60
  • 69
  • It seems to me the best place to "catch" this would be the user's browser (i.e. don't allow more than X links to external domains / don't allow links to IP addresses). As a "side effect" it would eliminate most of the ad and stat link infestation found on most pages. – dimitri.p Mar 30 '15 at 18:32
  • But you have no control over 3rd party compromised sites that reload your site in a hidden iframe. You can contact them, sure. But that takes time. – faker Mar 30 '15 at 18:36