
Maybe I am wrong, but routers should be able to see:

  • if requests are making many connection attempts on the same port (brute force/DDoS)
  • if requests are targeting all ports of a computer (port scanning)
  • and maybe other patterns that are easy to recognize as abnormal internet use.

Couldn't all this sort of bad internet practice be temporarily blocked (for a minute, say) by routers worldwide? Would it be too hard to set up this kind of rule without blocking normal traffic?
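For illustration only, the two heuristics proposed above could be sketched like this. This is a toy Python model with made-up thresholds, not anything a real router runs; the function name and limits are invented for the example:

```python
from collections import defaultdict

# Hypothetical thresholds -- real detection would need careful tuning.
BRUTE_FORCE_LIMIT = 100   # attempts on one (dst, port) per time window
PORT_SCAN_LIMIT = 50      # distinct ports probed on one dst per window

def classify(events):
    """events: list of (src_ip, dst_ip, dst_port) seen in one time window.
    Returns the set of src_ips flagged by the two naive heuristics."""
    per_port = defaultdict(int)    # (src, dst, port) -> attempt count
    ports_seen = defaultdict(set)  # (src, dst) -> distinct ports touched
    for src, dst, port in events:
        per_port[(src, dst, port)] += 1
        ports_seen[(src, dst)].add(port)
    flagged = set()
    for (src, dst, port), n in per_port.items():
        if n > BRUTE_FORCE_LIMIT:         # hammering one port: brute-force-like
            flagged.add(src)
    for (src, dst), ports in ports_seen.items():
        if len(ports) > PORT_SCAN_LIMIT:  # touching many ports: scan-like
            flagged.add(src)
    return flagged
```

Even this toy version hints at the problem the answers wrestle with: every threshold is a guess, and anything below it passes while legitimate bursts above it get blocked.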

I see a lot of end users facing these things, even people who know nothing about the internet.

Why aren't Internet Service Providers doing anything about this?

We all know bots are flooding the internet...

Should end users (called newbies by lamerz) be concerned about this, and read 30 books on security before buying a computer?

Krishna Pandey
Froggiz

3 Answers

  1. It's a battle against windmills. When you block obviously malicious traffic, black hats will switch to less obviously malicious traffic that achieves the same goals. New security threats and exploits are discovered every day, and keeping up with all of them is practically impossible.
  2. There would be false positives. Because possibly malicious traffic is so hard to characterize, completely harmless internet activity would frequently be identified as malicious, and innocent users would get kicked off the net. This would cause inconvenience both for the users and for the support staff of the internet service providers.
  3. It's not free. The high-performance routers used on backbone connections already require quite a lot of processing power just to do normal routing of multiple gigabytes per second. When you also want to scan all the traffic for malicious actions, it gets far more costly. Hardware appliances which can do this do exist, and they are sometimes used to protect corporate networks. But when you want to add these to all networks, someone has to pay for it. Guess who that will be.
  4. It would slow down the internet. When you want to block malicious traffic, you need to examine it. To examine it, you need to look at it in context. That means you need to store traffic before you forward it. This increases latency, which matters for applications like real-time communication or gaming.
  5. It might violate privacy laws. In some countries providers are not even allowed to look at the traffic of their users. Even under the data retention laws which are sprouting all over the world, providers are obligated to save certain information about the behavior of their customers, but the laws usually forbid them from using that information for their own purposes. Only law enforcement is allowed to look at it.
Philipp

There are multiple reasons:

1) Routers are meant to route packets efficiently. Still, many small organizations have Access Control Lists (ACLs) on their edge routers to filter, allow, or deny traffic based on their requirements. But this comes at the cost of system resources, as Philipp pointed out above.
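As a hedged illustration of such an edge-router ACL, here is a minimal Cisco IOS-style sketch. The addresses, ports, and rule set are invented for the example, not taken from the answer:

```
! Illustrative extended ACL on an edge router (all values hypothetical)
! Deny inbound telnet from anywhere
access-list 110 deny tcp any any eq 23
! Allow HTTPS to one public server
access-list 110 permit tcp any host 203.0.113.10 eq 443
! Block a range observed sending malicious traffic
access-list 110 deny ip 198.51.100.0 0.0.0.255 any
! Default: pass the rest
access-list 110 permit ip any any

interface GigabitEthernet0/0
 ip access-group 110 in
```

Every inbound packet on that interface is checked against the list top to bottom, which is exactly the per-packet work that costs the router resources.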

2) Routers work at the network layer and are not supposed to reassemble fragmented packets (which may be intended for higher layers) and match them against malicious traffic signatures. It's not practical.

3) There are other devices, like Intrusion Prevention Systems, network firewalls, Web Application Firewalls, etc., at an organization's perimeter to filter such traffic.

4) Most important, blocking an IP address is not a solution. I recall one incident where a payment gateway blocked an IP address from which it was receiving malicious traffic, which in turn blocked a small country that was using that public IP address to NAT its entire internet traffic.

5) You have no idea what amount of traffic backbone routers deal with. Given that any attacker will bounce his traffic through many proxies, maintaining such dynamic ACLs can be a problem.

6) Suppose a computer in a 10,000-strong organization is infected and acting as a bot, or otherwise generating malicious traffic. This could lead to the ISP blocking the entire organization, as it will mostly be using a single IP address to route its internet traffic.

Those are the reasons I can think of for now.

Krishna Pandey

Two great answers.

A couple of other points:

  1. An attacker could DOS any organization using spoofed IP addresses that matched "illegal" traffic. Just a few packets could blow any org off the internet even if that org did nothing wrong.
  2. Hackers will adjust behavior by scanning over time instead of all at once. Many scans already do this. Look at your WAN side firewall (you're logging and blocking all externally originating traffic on your system, right?). You'll find plenty of scans for open ports that occur in small, infrequent bursts.
  3. Many attacks are via http code injection bugs or misconfigurations at the application layer and not at Layer 2 or 3 where switches and routers play. Go check out OWASP for the most common attacks.
  4. Most website attacks don't come from a single computer (denial of service, or DOS); they are Distributed DOS. It can be very difficult to discern the difference between a DDOS and a suddenly hot website. How pissed off would a company be if it launched an amazing viral campaign, only to have its site locked out from most of the Internet due to a misdiagnosed DDOS? In fact, continuing point 1 above, an attacker could enlist backbone routers to DOS a website by first engaging in a DDOS, causing the rest of the Internet to be locked out.
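To make point 2 concrete, here is a toy Python sketch of a per-window scan detector and how a "low and slow" scan slips under it. The window size and threshold are hypothetical:

```python
from collections import defaultdict

WINDOW = 60      # seconds per detection window (hypothetical)
THRESHOLD = 20   # distinct ports per source per window (hypothetical)

def detect(probes):
    """probes: list of (timestamp, src_ip, dst_port). Flags sources that
    touch more than THRESHOLD distinct ports inside any single window."""
    buckets = defaultdict(set)   # (src, window index) -> ports touched
    for ts, src, port in probes:
        buckets[(src, ts // WINDOW)].add(port)
    return {src for (src, _), ports in buckets.items()
            if len(ports) > THRESHOLD}

# A fast scan: 100 ports in one second -- lands in one window, gets caught.
fast = [(0, "fast-scanner", p) for p in range(100)]
# A slow scan: the same 100 ports, one probe per minute -- each probe lands
# in its own window, so the per-window count never exceeds the threshold.
slow = [(p * WINDOW, "slow-scanner", p) for p in range(100)]
```

The slow scanner gathers exactly the same information; it just pays with time, which is why threshold-based blocking at routers only raises the attacker's patience requirement.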

The best thing is for ISPs to deliver a properly configured default gateway (router + modem) to residential customers. Most customers won't ever touch the configuration; those that do most likely know what they're doing, and caveat actor in that case. Many ISPs already do this, and it goes a long way toward improving everyone's Internet experience.

Andrew Philips