
Recently I read about denial-of-service attacks on Amazon and PayPal. I am curious how these are performed. These big companies must have huge servers, so a DoS attack would seem to require billions of bots to access them.

So my questions are:

  1. How is a DDoS attack performed at this level?
  2. How can you tell beforehand that an attack is happening?
  3. How can it be prevented? (How do you distinguish a bot from a real user, apart from the common CAPTCHA approach?)
ashmish2
  • By the way Amazon claims it was a hardware failure. Not so sure of that, though... – Camilo Martin Dec 13 '10 at 10:50
  • your mention of *DDOS* and *CAPTCHA* in the same question leads one to believe you are confusing two types of attack: brute forcing of encryption or authentication keys/mechanisms, or denial of service of network resources. Wikipedia contains more than enough info to clarify the differences: http://en.wikipedia.org/wiki/Denial-of-service_attack & http://en.wikipedia.org/wiki/Brute_force_attack – Zayne S Halsall Dec 14 '10 at 12:33

2 Answers


Fundamentally, such denial-of-service attacks involve sending the server more requests than it can handle. The attack can come from a large number of bots sending simple requests (it does not take billions to bring down a single server - a few thousand at most), or from a handful of bots sending requests that are notoriously expensive to execute.

The second attack type is the more vicious, because a single bot could conceivably bring down a server. For instance, MySQL's LIMIT N OFFSET M is notoriously slow when M becomes large, so a simple attack would be to request pages 200-300 out of 500 in quick succession, clogging all the MySQL worker threads. On an unprotected server, this can be done with Firebug. The only solution is to identify costly operations and then either optimize the hell out of them, make them sequential (so that clogging that part of the site does not bring down the rest of it), or detect IPs that request costly operations and refuse to perform them unless a certain wait interval is respected.
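To illustrate why OFFSET-based paging gets expensive, and the usual "keyset" rewrite that flattens the cost, here is a small sketch using Python's built-in sqlite3 (the `posts` table and its columns are made up for illustration; the same pattern applies to MySQL):

```python
import sqlite3

# Hypothetical table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO posts (body) VALUES (?)",
                 [("post %d" % i,) for i in range(1000)])

# Offset pagination: the engine must walk and discard every skipped row,
# so the cost grows with the offset -- cheap for an attacker to request.
page = conn.execute(
    "SELECT id FROM posts ORDER BY id LIMIT 10 OFFSET 900").fetchall()

# Keyset pagination: seek directly past the last id the client saw;
# the cost stays flat no matter how deep the page is.
page2 = conn.execute(
    "SELECT id FROM posts WHERE id > ? ORDER BY id LIMIT 10",
    (900,)).fetchall()

print(page[0][0], page2[0][0])  # both pages start at id 901
```

This is one concrete way to "optimize the hell out of" a costly operation: the deep-page query stops being a cheap lever for an attacker.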

The first attack type is harder to pull off, because you need many bots. On the other hand, it is also harder to stop on the server side: if thousands of bots are sending you data as fast as they can, your bandwidth will be eaten up by the flood and there is nothing the server can do about it (even if it flat-out refuses 99% of those requests), so a router with flood prevention is a good bet if you think you might be a target.
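For a rough idea of what "flood prevention" means, here is a toy token-bucket limiter of the kind such routers implement: tokens refill at a fixed rate, and traffic is dropped when the bucket is empty (the class and parameter names are invented for this sketch, not a real router API):

```python
class TokenBucket:
    """Toy token-bucket rate limiter (illustrative sketch).

    Tokens refill at `rate_per_sec` up to `burst`; each allowed
    request spends one token, and requests are dropped when the
    bucket is empty.
    """

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the time elapsed since the last packet.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # forward the packet
        return False      # drop it: the source is flooding

bucket = TokenBucket(rate_per_sec=1.0, burst=2)
print(bucket.allow(0.0))  # True  - burst capacity available
print(bucket.allow(0.0))  # True  - second token spent
print(bucket.allow(0.0))  # False - bucket empty, packet dropped
print(bucket.allow(1.5))  # True  - 1.5s of refill restored a token
```

Note the caveat from the comments below: shaping like this only helps once the packets have already traversed your pipe, which is why serious mitigation has to happen upstream.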

  • Another addition worth mentioning, which was notoriously used in the DDoS attacks on Yahoo! and others in 2000, is that you can use your targets against one another. If you spoof the origin of the packets sent to target A so that they appear to come from target B, then target A will send its responses to target B. Immediate bonus points for the attacker: you get a stronger load on BOTH your targets, and a lesser one on your bots, which don't see their bandwidth eaten by the responses. – haylem Dec 13 '10 at 23:33
    @Victor, +1 for a reasonable (if not complete) answer. The largest problem with D/DOS attacks is that stopping them is not practical in any sense. The router with "flood protection" mentioned is essentially useless, if your service is intended for multiple users and to be publicly available. You would need your upstream provider to protect you if you intend to continue servicing users, as regardless of whether you drop the packets, they're still traveling down your pipe. Wikipedia has a great explanation. – Zayne S Halsall Dec 14 '10 at 12:30
  • Exactly, Zayne. It doesn't matter whether the traffic jam is in front of your building or in the street leading to the highway - it's still in your way. So the best defense against DDoS is to have an upstream provider that can recognize an attack, block the source(s), and then reroute legitimate traffic. No small order, that. – George Erhard Mar 24 '17 at 15:19

There really is no way to prevent a truly distributed DoS attack, because there is no difference between one and a surge of legitimate traffic. (Serving CAPTCHAs can keep an attack from tying up long-running processes or heavy resource usage, but a large enough attack will overwhelm your CAPTCHA-serving bandwidth as well.)