80

I have never heard of or seen a large-scale web site like Amazon, Microsoft, Apple, Google, or eBay suffer from a DDoS attack. Have you?

I have a personal philosophy that the bigger you are, the more of a target you are for such attacks. Imagine the brownie points you would get if you could bring down a major website.

Yet such sites have always remained sturdy and seemingly invincible. What security measures have they implemented, and can these be applied to smaller businesses?

Lakitu
  • So have I. 10-15 years ago a number of kidiot botmasters were able to DDoS major sites off the net. One example: http://www.theregister.co.uk/2005/12/28/ebay_bots_ddos/ – Dan Is Fiddling By Firelight Nov 21 '14 at 21:41
  • Google's main DDoS protection is that they've got a highly geographically distributed system with more bandwidth than anyone but a state-level attacker. That's not something that a smaller business can use. – Mark Nov 21 '14 at 22:30
  • @Mark, that needs to be explored. There are perks if you run virtual machines and other appliances on their network. – Filip Dupanović Nov 22 '14 at 01:14
  • @NewWorld Well, google for it; it's not hard to find. One example is PayPal (eBay-owned) being DDoSed by "Anonymous" not too long ago. – Luc Nov 23 '14 at 13:42
  • Or VISA, some 2-3 years ago. – Damon Nov 23 '14 at 15:48
  • Konami's Metal Gear Online 2 servers were DDoSed frequently until they decided to shut down the servers. The DDoSers were only kids, mind you. – Git Gud Nov 23 '14 at 16:16
  • [Here](http://money.cnn.com/2010/12/09/technology/amazon_wikileaks_attack/) is an article I read a while back on how Amazon prevented Anonymous from launching a DDoS against them – Ranhiru Jude Cooray Nov 24 '14 at 07:46
  • Blizzard also suffered from DDoS attacks against World of Warcraft. – Philipp Nov 24 '14 at 12:21
  • @Philipp So has Valve, against Dota servers. They recently wrote a blog post briefly explaining some of the steps they've taken to mitigate these attacks: http://blog.dota2.com/2014/11/network-update/ – Ajedi32 Nov 25 '14 at 15:48
  • Mojang's (Minecraft) login servers have been DDoSed more times than I can count - I remember the days when I played Minecraft, and at random, normally long intervals during peak times, no one could log in to **any** server. – Ben Aubin May 11 '16 at 13:47

5 Answers

66

They generally take a very layered approach. Here are some things I've either implemented or seen implemented at large organizations. As for your specific question about smaller businesses: you would generally find a third-party provider to protect you. Depending on your use case, this may be a cloud provider, a CDN, a BGP-routed solution, or a DNS-based solution.

Bandwidth Oversubscription - This one is fairly straightforward: as you grow larger, your bandwidth costs drop. Large organizations will generally lease significantly more capacity than they need, to account for both growth and DDoS attacks. If an attacker is unable to muster enough traffic to overwhelm this headroom, a volumetric attack is generally ineffective.

Automated Mitigation - Many tools monitor NetFlow data from routers and other sources to establish a baseline for traffic. If traffic patterns step outside these zones, DDoS mitigation tools can attract the traffic to themselves using BGP or other mechanisms and filter out the noise, then pass the clean traffic further into the network. These tools can generally detect both volumetric attacks and more insidious attacks such as Slowloris.
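
To make the baseline idea concrete, here is a minimal sketch in Python. The smoothing factor, threshold, and synthetic traffic numbers are illustrative assumptions, not any vendor's defaults; real tools consume NetFlow/sFlow exports and use far richer statistics.

```python
# Baseline-based anomaly detection over per-interval traffic volumes.
# All parameters here are illustrative assumptions.

class BaselineDetector:
    def __init__(self, alpha=0.1, threshold_factor=4.0, warmup=30):
        self.alpha = alpha                      # EWMA smoothing factor
        self.threshold_factor = threshold_factor
        self.warmup = warmup                    # intervals before alerting
        self.baseline = None                    # smoothed bytes/interval
        self.samples = 0

    def observe(self, volume):
        """Feed total bytes seen in one interval; return True on anomaly."""
        if self.baseline is None:
            self.baseline = volume
        anomalous = (self.samples >= self.warmup
                     and volume > self.threshold_factor * self.baseline)
        # Fold only non-anomalous traffic into the baseline, so an ongoing
        # attack doesn't teach the detector that the attack is "normal".
        if not anomalous:
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * volume
        self.samples += 1
        return anomalous

detector = BaselineDetector()
normal = [1_000_000 + (i % 7) * 50_000 for i in range(60)]  # ~1 MB/interval
attack = [40_000_000] * 5                                   # sudden spike
for volume in normal + attack:
    if detector.observe(volume):
        print("anomaly: candidate DDoS, divert traffic to scrubbing")
```

A real deployment would track many baselines (per prefix, per protocol, per port) and trigger a BGP announcement toward scrubbing infrastructure rather than a print statement.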

Upstream Blackholing - There are ways to filter UDP traffic using router blackholing. I've seen situations where a business has no need to receive certain UDP traffic (e.g., NTP and DNS) at its infrastructure, so it has its transit providers blackhole all of that traffic. The largest volumetric attacks out there are generally reflected NTP or DNS amplification attacks.
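
A back-of-the-envelope calculation shows why reflected UDP attacks dominate the volumetric end. The amplification factors below are the commonly cited figures from US-CERT advisory TA14-017A; the botnet parameters are arbitrary assumptions for illustration.

```python
# Rough reflected-attack bandwidth: bots send small spoofed requests to
# open reflectors, which answer the victim with much larger responses.
AMPLIFICATION = {"DNS": 54, "NTP (monlist)": 556}  # response/request size ratio

bots = 2_000                # assumed compromised hosts
requests_per_sec = 100      # spoofed requests each bot sends per second
request_bytes = 60          # small query carrying the victim's spoofed address

for protocol, factor in AMPLIFICATION.items():
    reflected_bps = bots * requests_per_sec * request_bytes * factor * 8
    print(f"{protocol}: ~{reflected_bps / 1e9:.1f} Gbps arriving at the victim")
```

With these assumptions a modest 2,000-host botnet yields roughly 5 Gbps via DNS and over 50 Gbps via NTP, which is why blackholing unneeded UDP upstream is so effective.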

Third Party Provider - Even many fairly large organizations fear that monster 300 Gbps attack. They often implement either a DNS-based redirect service or a BGP-based service to protect them in case they suffer a sustained attack. I would say CDN providers also fall under this umbrella, since they can help an organization stay online during an attack.

System Hardening - You can often configure both your operating system and your applications to be more resilient to application-layer DDoS attacks. Everything from ensuring enough file descriptors on your Linux server to configuring the right number of Apache worker threads can make it harder for an attacker to take down your service.
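
As one concrete hardening example, here is a small Python sketch that checks and raises the process's file-descriptor limit on Linux/Unix, which caps how many simultaneous sockets a server can hold open. The target value is an arbitrary assumption; in practice the same thing is usually done with ulimit or systemd settings.

```python
# Check and raise this process's open-file limit (Linux/Unix only).
# TARGET is an illustrative assumption, not a recommendation.
import resource

TARGET = 65_536

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"file descriptors: soft={soft}, hard={hard}")

if soft < TARGET:
    # Without privileges we can only raise the soft limit up to the hard limit.
    if hard == resource.RLIM_INFINITY:
        new_soft = TARGET
    else:
        new_soft = min(TARGET, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    print(f"raised soft limit to {new_soft}")
```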

theterribletrivium
  • Excellent list. I would suggest rate limiting be added as a separate section, since it can occur at the upstream/hardware, 3rd party and system/application layer. Generally application servers (the ones performing computation and business rules, not just serving files) can reliably refuse requests from any but a small list of pre-approved clients (via an IP whitelist). – Patrick M Nov 22 '14 at 04:16
  • Another approach which is often taken is secrecy about where the weakest spots are and just how close an attack may have come to causing actual problems. For example, if you are using a CDN, you usually have a server behind it, and attacking that server directly would be more effective than attacking the CDN, so the IP of that server is kept secret. If an attacker attempts a DDoS attack and fails to cause any visible problems, he will probably give up and use the botnet for something else. – kasperd Nov 24 '14 at 00:22
  • How likely is an attacker to come back and try again? If the attacker somehow got word that he managed to push the system to 90% usage, he would likely ramp up his botnet and come back with another attack. But if he doesn't get that information, he might not bother to try again, since for all he knows, it might take an order of magnitude more traffic to perform the attack. – kasperd Nov 24 '14 at 00:25
13

While there are no absolute countermeasures against DDoS, there are some ways to control it.
First is to use a Content Delivery Network: several data centers across the world serve content to visitors from different geographical areas. This helps eliminate a single point of failure, makes it harder to exhaust resources or saturate the links, and balances the attack load.
Another way is to work closely with the major backbones, ISPs, and respective organizations to block attacker IPs at the most specific network prefix possible, preventing their traffic from reaching its target. Hope it helps.
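
When handing attacker IPs to an upstream provider, it helps to first aggregate them into the most specific covering prefixes. Here is a small sketch using Python's standard ipaddress module; the addresses are made-up documentation examples:

```python
# Collapse individual attacker IPs into the most specific covering CIDR
# blocks before asking an upstream provider to filter them.
import ipaddress

attackers = [
    "203.0.113.4", "203.0.113.5", "203.0.113.6", "203.0.113.7",
    "198.51.100.23",
]

networks = [ipaddress.ip_network(ip) for ip in attackers]   # each becomes a /32
for prefix in ipaddress.collapse_addresses(networks):
    print(prefix)   # -> 198.51.100.23/32 and 203.0.113.4/30
```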

Sam
  • A CDN is the most readily attainable solution for smaller businesses. – KnightHawk Nov 21 '14 at 20:11
  • @JosephNeathawk I don't think so. Facebook and Microsoft are using CDNs already. – Sam Nov 24 '14 at 08:08
  • I'm not sure what is meant by your comment. I was agreeing that CDN is a good choice and that it is easily available to small businesses. (easily available because of its low or free price tag) The fact that a large company uses it does not change anything for small businesses who can still get cheaper priced versions. – KnightHawk Nov 24 '14 at 15:02
  • @JosephNeathawk Sorry! I got it wrong :) – Sam Nov 25 '14 at 19:20
13

As a mid-sized company, we use a DoS mitigation service to reduce the risk of our website being knocked offline. Our site resolves to the provider's IP address, the provider forwards requests to our web server, and our web server communicates only with the provider.

The provider then uses a variety of monitoring and correlation tools to determine whether suspicious traffic constitutes an actual attack. If traffic is deemed to be an attack, the provider does not forward those requests to our web servers and soaks up the attack itself. To perform this type of mitigation, the provider's capacity must exceed what the attacker can deliver. For larger companies, which normally have greater bandwidth needs, I would expect that they either outsource this to ISPs or build an internal system that performs the same mitigation.
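
As a backstop, the "origin only talks to the provider" rule can even be enforced in the application, though in practice it belongs in the firewall or security groups. A minimal sketch; the provider ranges are made-up documentation addresses, not any real mitigation provider's:

```python
# Origin server that only answers requests arriving from the mitigation
# provider's address ranges; everything else is refused.
import ipaddress
from http.server import BaseHTTPRequestHandler, HTTPServer

PROVIDER_RANGES = [ipaddress.ip_network(n)
                   for n in ("192.0.2.0/24", "198.51.100.0/24")]  # assumed ranges

class OriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client = ipaddress.ip_address(self.client_address[0])
        if not any(client in net for net in PROVIDER_RANGES):
            self.send_error(403, "direct access to the origin is not allowed")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the origin\n")

HTTPServer(("0.0.0.0", 8080), OriginHandler).serve_forever()
```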

pr-
6

My company has dealt with DDoS attacks of up to 180 Gbps; here are the techniques I have used to mitigate them.

The size of a website isn't the only thing that makes it a bigger target; these also play a significant role:

  • Public relations (are you marketing yourself as something you are not? which audience are you targeting?)
  • Delivering on promises
  • Treating customers the right way

Motives for DDoS attacks include but are not limited to the following:

  • Fame ("Oh look at me, I managed to take this site down")
  • Money (larger sites are more expensive to attack; attackers looking for money will generally target smaller sites with high revenues that do not have a large technical team)
  • Activism

Also (from one of the comments):

  • Another motive is distraction. For example, they attack Apache so that you are busy fixing it while they brute-force your SSH password.

There are many different types of DDoS attacks and many ways they are initiated. First get the points I listed above in order; your DDoS attacks will then likely decrease. This doesn't mean that you will never experience them again, but you give people less of a motive to attack you.

On a technical level, there are multiple things to consider, because most businesses have multiple nodes in their infrastructure. In some cases, each node requires a different type of approach. In my case these nodes were an API, a game server, an authentication server, a database, and a social server. Step 1 was to make sure we never exposed an IP address that did not need to be exposed; in my case those were the authentication server, the database, and the social server. Generally, limiting the points of failure is a good approach to start with. Protection is incredibly expensive, so put the most resilient protection where you really need it most.

After you have determined which points are required to be public, you can protect each function individually in the way it needs to be protected. theterribletrivium gave an excellent answer on techniques; here are my 2 cents on that one.

  • Anycast (for example, a CDN. This works incredibly well for static nodes such as APIs, DNS servers and web servers; the downside is that it currently does not work effectively for systems with a single point of failure, such as game servers)
  • Network rules & packet inspection (e.g., each connection can only take up X KB of traffic per second, and each packet should match pattern x, y, or z. This worked well for our games; see the token-bucket sketch below. The downside is that if they hit your bandwidth limit, you're out of luck.)
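
A minimal token-bucket sketch of the per-connection bandwidth rule mentioned above; the rates and burst sizes are arbitrary assumptions:

```python
# "Each connection may use at most X KB/s": a classic token bucket.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Return True if nbytes may pass now; otherwise drop or throttle."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# One bucket per connection: 64 KB/s sustained with a 128 KB burst.
bucket = TokenBucket(64 * 1024, 128 * 1024)
for packet_size in (4_096, 200_000, 4_096):
    print(packet_size, "->", "pass" if bucket.allow(packet_size) else "drop")
```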

Feel free to ask any questions!

Pim de Witte
0

I'm not sure if this is what you were getting at with the Automated Mitigation mentioned by @theterribletrivium, but they also use load balancers to distribute traffic evenly across separate servers so that all of them run as fast as possible.

Although it's not the most effective way to distribute users evenly across servers, Google uses what is called round-robin DNS. A round-robin DNS lookup returns multiple IP addresses, and the user connects to one of them. The problem, however, is that many computers may end up choosing the same IP address, leaving one server overloaded while the others sit underused.
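
You can see this from the client side with a few lines of Python; which addresses come back depends on your resolver, and the random choice mimics naive client behaviour:

```python
# Round-robin DNS from the client's perspective: the resolver returns
# several records and the client picks one of them.
import random
import socket

infos = socket.getaddrinfo("google.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print("records returned:", addresses)

# If many clients happen to pick (or cache) the same address, that one
# server runs hot while the others sit underused -- the imbalance
# described above.
print("connecting to:", random.choice(addresses))
```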

They use a similar setup for dealing with the large amounts of information they store. Google uses what it calls BigTable to store data for Google Maps, Blogger, YouTube, Gmail and more. It is reported that Google uses hundreds of thousands of servers to store all of this information and to keep their websites running as fast as possible.

They use software (which they probably developed themselves) to host their websites without using large amounts of memory and CPU. The most popular web server, Apache, is almost certainly not used for front ends at this scale because, with its traditional thread-per-connection design, it suffers from the C10k problem: the difficulty of handling more than roughly 10,000 simultaneous connections on one server (and Google surely sees far more than 10,000 connections at any one time).
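
For contrast, here is a minimal event-driven server in Python: a single thread multiplexes many connections with asyncio instead of dedicating a thread to each, which is the standard way around the C10k problem. This toy is not what Google runs; it just illustrates the model:

```python
# One thread, many sockets: event-driven handling instead of
# thread-per-connection.
import asyncio

async def handle(reader, writer):
    await reader.read(1024)   # read the (toy) request and ignore it
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```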

The servers and hardware they use are top of the line in number rather than in raw performance: according to the Wikipedia article on the Google platform, Google doesn't use the hardware that performs best, but the hardware that is the best bang for the buck.

If you think about it, websites such as Google, Amazon, Microsoft, and Apple are technically always under a DDoS-scale load. But they have such advanced technologies in place that their websites can be accessed by everyone without being shut down.

SameOldNick
  • Regarding hardware used, I think the point of Google is they explicitly do not use top of the line hardware. They do use a top of the line number of devices instead. – Volker Siegel Nov 23 '14 at 20:28