
How can an ISP with relatively low bandwidth, say 50 Gbps, handle a DDoS attack that exceeds that capacity? I know there is a solution called "blackholing".

  • Is this enough to mitigate DDoS attacks, or are there other enterprise solutions?
  • What kinds of DDoS mitigation services are available today?
  • Can a CDN mitigate DDoS attacks?
R1W

2 Answers


There are a number of strategies, each with its own costs and benefits. Here are a few (there are more, and variations):

blackholing

By blackholing traffic, you discard all traffic towards the target IP address. Typically, ISPs try to use RTBH (remotely triggered blackholing), by which they ask their upstream networks to discard the traffic so that it never even reaches the destination network; the benefit is that it then cannot saturate the ISP's uplinks. The biggest drawback is that you do exactly what the attackers want: the target IP address (and thus the services running on it) goes offline. However, the rest of the ISP's customers will not suffer from the attack, and the costs are low.
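As an illustration of how an RTBH trigger is often wired up: the ISP injects a /32 route for the victim tagged with the well-known BLACKHOLE community 65535:666 (RFC 7999), and the upstream discards matching traffic. Below is a minimal sketch assuming ExaBGP is configured to run the script as an API process; the prefix and next-hop are documentation values, and the exact trigger community depends on the upstream's policy:

```python
import time

# RFC 7999 well-known BLACKHOLE community. Many upstreams also define
# their own trigger communities -- check your transit provider's policy.
BLACKHOLE_COMMUNITY = "65535:666"
DISCARD_NEXT_HOP = "192.0.2.1"      # discard next-hop agreed with the upstream

def trigger_rtbh(victim_ip: str) -> None:
    """Announce a /32 blackhole route via ExaBGP's text API (stdout)."""
    print(f"announce route {victim_ip}/32 next-hop {DISCARD_NEXT_HOP} "
          f"community [{BLACKHOLE_COMMUNITY}]", flush=True)

def withdraw_rtbh(victim_ip: str) -> None:
    """Withdraw the blackhole route once the attack subsides."""
    print(f"withdraw route {victim_ip}/32 next-hop {DISCARD_NEXT_HOP}",
          flush=True)

if __name__ == "__main__":
    trigger_rtbh("198.51.100.10")   # hypothetical victim address
    time.sleep(3600)                # keep the victim blackholed for an hour
    withdraw_rtbh("198.51.100.10")
```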

selective blackholing

Instead of blackholing an IP address for the entire internet, it may be useful to change BGP routing for the targeted address range so that it is only reachable from parts of the internet. This is typically called 'selective blackholing' and is implemented by a number of large carriers. The idea is that many internet services only need to be available in a specific region (typically a country or continent). For example, using selective blackholing, a Dutch ISP under attack could choose to have its IP ranges blackholed for traffic coming from China, while European IPs would still be able to reach the targeted address. This technique works very well when attack traffic comes from very different sources than regular traffic.
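As a toy model of the routing decision involved (pure Python, with invented session names and regions): the victim prefix stays reachable via sessions in the region that matters and is blackholed towards everyone else:

```python
# Hypothetical BGP sessions, tagged by the region they serve.
UPSTREAM_SESSIONS = {
    "transit-ams": "EU",
    "transit-fra": "EU",
    "transit-hkg": "APAC",
    "transit-nyc": "NA",
}

def selective_blackhole(victim_prefix: str, keep_region: str) -> dict:
    """Per-session announcement plan: blackhole everywhere except keep_region."""
    plan = {}
    for session, region in UPSTREAM_SESSIONS.items():
        if region == keep_region:
            plan[session] = f"announce {victim_prefix} as usual"
        else:
            plan[session] = f"announce {victim_prefix} tagged with a blackhole community"
    return plan

# A Dutch ISP under attack keeps the prefix reachable from Europe only.
for session, action in selective_blackhole("198.51.100.0/24", "EU").items():
    print(f"{session}: {action}")
```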

scrubbing

A nicer solution is to use a scrubbing center, usually hosted outside the ISP's network as a service. When under DDoS attack, the ISP redirects traffic for the affected IP range to the scrubbing center. The scrubbing center has the equipment to filter unwanted traffic, leaving a stream of (mostly) clean traffic which gets routed back to the ISP. Compared to blackholing this is a better solution, since the services on the target IP remain available. The drawback is that most scrubbing centers are commercial and can cost quite a lot. Also, scrubbing is not always easy: there can be both false positives (wanted traffic being filtered) and false negatives (unwanted traffic not being filtered).
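What a scrubbing center does internally is vendor-specific, but a first-pass filter often just drops well-known amplification vectors. The sketch below (pure Python over flow tuples, with made-up sample data) also shows why false negatives are inevitable: a static denylist like this misses any attack traffic that doesn't match it:

```python
# A flow is (protocol, src_port, dst_port, byte_count); sample data is made up.
AMPLIFICATION_SOURCE_PORTS = {
    ("udp", 53): "DNS reflection",
    ("udp", 123): "NTP reflection",
    ("udp", 1900): "SSDP reflection",
    ("udp", 11211): "memcached reflection",
}

def scrub(flows):
    """Split flows into (clean, dropped) using a static reflection denylist."""
    clean, dropped = [], []
    for flow in flows:
        proto, src_port, _dst_port, _nbytes = flow
        reason = AMPLIFICATION_SOURCE_PORTS.get((proto, src_port))
        if reason:
            dropped.append((flow, reason))
        else:
            clean.append(flow)
    return clean, dropped

flows = [
    ("udp", 123, 44321, 468),    # NTP reflection traffic -> dropped
    ("tcp", 49152, 443, 1500),   # ordinary HTTPS traffic -> kept
]
clean, dropped = scrub(flows)
print(f"kept {len(clean)} flow(s), dropped {len(dropped)} flow(s)")
```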

traffic engineering

ISP networks usually have a number of connections to the internet via transit providers and/or internet exchange points. By making these connections, as well as links within the ISP's backbone, much bigger than normal traffic patterns require, the network can cope with DDoS attacks. However, there is a practical limit to this, since unused bandwidth capacity is costly (investing in 100 Gbps equipment and upstream connections is very expensive and cost-inefficient if you're only moving a few Gbps), and it usually just shifts the problem elsewhere in the network: somewhere there will be a switch, router or server with smaller capacity, and that becomes the choke point.

With some attacks, ISPs may be able to balance incoming traffic so that not all external connections are flooded and only one or a few become saturated.
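A toy model of that balancing (pure Python; real routers hash the full flow tuple in hardware, and this sketch hashes only the source address over invented uplink names):

```python
import ipaddress
from collections import Counter

UPLINKS = ["transit-a", "transit-b", "ix-peer-1", "ix-peer-2"]  # invented names

def pick_uplink(src_ip: ipaddress.IPv4Address) -> str:
    """Toy ECMP: hash the source address onto one of the uplinks."""
    return UPLINKS[int(src_ip) % len(UPLINKS)]

# Simulate 10,000 attack sources scattered across the 100.64.0.0/10 space.
base = int(ipaddress.IPv4Address("100.64.0.0"))
sources = [ipaddress.IPv4Address(base + i * 257) for i in range(10_000)]
load = Counter(pick_uplink(ip) for ip in sources)
print(load)   # a roughly even spread: no single uplink bears the whole attack
```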

Within larger networks, it's possible to create a "sinkhole" router which attracts only the traffic for the IP range under attack. Traffic towards all other IP ranges gets routed over other routers. This way, the ISP can isolate the DDoS to a certain degree by announcing the targeted IP range in BGP only on the sinkhole router, while withdrawing the announcement of that range on all other routers. Traffic from the internet to that destination is then forced through the sinkhole router. Its uplinks may all become saturated, but uplinks on other routers will not be flooded and other IP ranges will not be affected.

The big drawback is that the entire range containing the targeted IP (at least a /24) may suffer from this. This solution is often a last resort.
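As a sketch of that announcement shuffle (pure Python, with invented router names and documentation prefixes): the targeted /24 is withdrawn from the regular edge routers and announced only from the sinkhole, so external traffic for it converges there:

```python
# Documentation prefixes and invented router names.
PREFIXES = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]
EDGE_ROUTERS = ["edge-1", "edge-2"]
SINKHOLE_ROUTER = "sinkhole-1"

def announcement_plan(victim_prefix: str) -> dict:
    """Announce the victim prefix only on the sinkhole router; everything
    else stays on the regular edge routers."""
    plan = {router: [p for p in PREFIXES if p != victim_prefix]
            for router in EDGE_ROUTERS}
    plan[SINKHOLE_ROUTER] = [victim_prefix]
    return plan

for router, prefixes in announcement_plan("198.51.100.0/24").items():
    print(f"{router} announces: {', '.join(prefixes)}")
```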

local filtering

If the ISP has enough capacity on its uplinks (so they won't be saturated), it can implement local filtering. This can be done in various ways, for example (a rate-limiter sketch follows this list):

  • adding an access list on routers that rejects traffic based on characteristics like source address or destination port. If the number of source IP addresses in an attack is limited, this can work efficiently
  • implementing traffic rate limiters to reduce the amount of traffic towards the target IP address
  • routing traffic through local scrubbing boxes which filter out unwanted traffic
  • implementing BGP FlowSpec, which allows routers to exchange filter rules using BGP (for example: 'reject all traffic from IP address X to IP address Y, protocol UDP, source port 123')
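The rate-limiter option from the list can be sketched with a classic token bucket, here keyed per destination IP (pure Python; the 1,000 pps policy is an invented number, and real deployments enforce this in router or scrubbing hardware):

```python
import time

class TokenBucket:
    """Classic token bucket: allows `rate` packets/s, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per protected destination IP (hypothetical policy: 1000 pps).
buckets: dict[str, TokenBucket] = {}

def admit(dst_ip: str) -> bool:
    """Return True if a packet towards dst_ip fits within the rate limit."""
    bucket = buckets.setdefault(dst_ip, TokenBucket(rate=1000, burst=2000))
    return bucket.allow()
```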

content delivery networks and load balancing

Web hosts can use content delivery networks (CDNs) to host their websites. CDNs use global load balancing and thus have enormous amounts of bandwidth and caching server clusters all over the world, making it hard to take a website down completely. If one set of servers goes down due to a DDoS, traffic gets redirected automatically to another cluster. A number of big CDNs also operate as scrubbing services.

On a somewhat smaller scale, local load balancing can be deployed. In that case, a pool of servers hosts the website or web application, and a load balancer distributes traffic over the servers in that pool, increasing the available server capacity, which may help withstand a DDoS attack.
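Both variants share the same core mechanism. A minimal sketch of a load balancer that round-robins over a pool and fails over around unhealthy members (pure Python, invented server names; in a CDN the "servers" would be whole clusters):

```python
from itertools import cycle

POOL = ["web-1", "web-2", "web-3"]          # hypothetical server pool
healthy = {name: True for name in POOL}     # fed by real health checks
_rotation = cycle(POOL)

def next_backend() -> str:
    """Round-robin over the pool, skipping unhealthy members (failover)."""
    for _ in range(len(POOL)):
        candidate = next(_rotation)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy backends left: the whole pool is down")

healthy["web-2"] = False                     # e.g. knocked out by a DDoS
print([next_backend() for _ in range(4)])    # traffic shifts to web-1/web-3
```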

Of course, CDNs and load balancing only work for hosting; they don't help access ISPs (networks that connect end users).

Teun Vink
  • There's a real-world example [here](https://aastatus.net/apost.cgi?incident=2178) with a [related news article](http://aa.net.uk/news-20151119-dos.html) – Dezza Aug 25 '16 at 08:01
  • @TeunVink CDNs can help with the DDoS problem, but they are also vulnerable and can be attacked themselves. – R1W Aug 28 '18 at 18:32
  • @R1- CDNs push the content closer to the requester and away from the source, so it is very, very difficult to DDoS all CDN nodes. You can DDoS certain regions, though. – schroeder Aug 29 '18 at 10:44
  • @schroeder I had related experience with this specific subject, but the problem was that the website's video clips were distributed on a CDN with low bandwidth, and it was a disaster. – R1W Aug 29 '18 at 10:48
  • @R1- It is true that smaller CDNs struggle to cope with DDoS. But your question is about ISPs, and you can bet an ISP will nuke the target to protect its own interests. – mootmoot Aug 31 '18 at 11:37
  • @mootmoot Sometimes they have to for their own sake, but sometimes they re-route the customer to another cloud. – R1W Aug 31 '18 at 11:45

Good question!

Let's talk through a practical scenario: a web application deployed in production is attacked by an adversary sending 10,000 requests per second at the application layer, coupled with a large network-layer attack that is still manageable.

Certainly, ISPs can handle a DDoS attack, but it depends on their overall setup, resources and infrastructure.

Case study:

One of my clients recently faced a DDoS attack on his web application that proved very difficult to mitigate. Why?

Because it was an application-layer DDoS attack: the attacker was smart, rotating across various resources that were not rate-limited instead of hammering particular endpoints.

As you can imagine, the key takeaways were not only to implement a powerful CDN, but also to:

  1. Improve load balancing.
  2. Implement rate limiting through proper Nginx (or any other server) configuration. Rate limiting has a number of configuration subsets depending on the attack in question, which let you tune your defence; the Nginx documentation on rate limiting is well worth reading for applications handling a large load. (A minimal application-side sketch follows this list.)
  3. Use the Akamai CDN, which is considered very efficient and powerful in this regard, especially with a huge user base and the need to serve in real time.
  4. Implement a captcha on almost every imaginable field in the application after two requests from a user during peak attack conditions. Most popular web applications use reCAPTCHA, which also blocks Tor exit nodes and proxies (often used by attackers to circumvent network-layer firewall rules by jumping from one Tor circuit or proxy to the next). That was a plus in our case, since some of the malicious traffic came from the highly anonymised Tor network.
  5. Finally, change the application logic so that the rate limiting cannot be bypassed.
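To illustrate points 2 and 5, here is a minimal application-side sketch (pure Python; the header name and limits are assumptions). The important detail from point 5 is the key: rate-limit on an identity the attacker cannot cheaply rotate, taking the client IP from a header your own proxy sets, not one the client controls:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS = 10        # hypothetical per-client budget per window

_history: dict[str, deque] = defaultdict(deque)

def client_key(headers: dict) -> str:
    """Key on the client IP as set by OUR proxy (e.g. Nginx writing
    X-Real-IP), never on a header the client itself controls --
    otherwise the limit is trivially bypassed."""
    return headers.get("X-Real-IP", "unknown")

def allow_request(headers: dict) -> bool:
    """Sliding-window limit shared by ALL endpoints, so rotating across
    different resources (as this attacker did) doesn't reset the count."""
    key = client_key(headers)
    now = time.monotonic()
    window = _history[key]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False          # reject: over budget for this window
    window.append(now)
    return True
```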

At the ISP layer, you can only viably control network-layer attacks without adversely affecting your users, and even that has downsides. So blackholing may not be enough for your enterprise, and on the other hand it might cause serious usability problems in your application:

A key consequence of using blackhole routing when good traffic is also affected, is that the attacker has essentially accomplished their goal of disrupting traffic to the target network or service. Even though it can help a malicious actor accomplish their goal, blackhole routing can still be useful when the target of the attack is a small site that's part of a larger network. In that case, blackholing the traffic directed at the targeted site could protect the larger network from the effects of the attack.

That is what Cloudflare has to say on the matter. So blackholing essentially routes network traffic into a null destination and drops it. But what about application-layer or deeper attacks that make use of multiple IP subnets? Traditional methods fail in those cases, and you need to investigate and fix the main weakness in your application, which may be that you are disclosing an internal server IP or exposing a vulnerable endpoint even though you are using Cloudflare, in which case Cloudflare fails too.

A Khan