13

I've never quite understood whether or not it's possible to rate-limit incoming traffic. I realize that there is no direct way to control the remote server's rate of sending packets (unless you're in control of both endpoints), but taking this limitation into account, how exactly do download managers allow me to successfully set download speed limits?

Is there any link between TCP slow-start and rate-limiting incoming traffic? Is it possible to use the methods described by slow-start to artificially limit the sender's rate of sending packets?

As an additional consideration, it should be noted that the server on which I'd like to implement traffic shaping establishes the PPPoE connection itself, and acts as a router for the rest of the network.


Update: The answers thus far have given a fair overview of the questions I've asked, but I still don't know how download managers are able to limit incoming traffic, and more specifically, whether it's possible to implement a similar strategy on a Linux gateway box.

Richard Keller
  • Free Download Manager probably uses their own upload servers, and torrent clients mostly limit the number of connections they use. Also, have you looked into 'QOS'? – DutchUncle Jun 13 '11 at 18:02
  • Most download managers simply rate-limit the ACKs sent back, thereby slowing the incoming stream of data. – Chris S Jun 16 '11 at 19:11

6 Answers

12

The download managers most likely work as explained in the trickle paper.

A process utilizing BSD sockets may perform its own rate limiting. For upstream limiting, the application can do this by simply limiting the rate of data that is written to a socket. Similarly, for downstream limiting, an application may limit the rate of data it reads from a socket. However, the reason why this works is not immediately obvious. When the application neglects to read some data from a socket, its socket receive buffers fill up. This in turn will cause the receiving TCP to advertise a smaller receiver window (rwnd), creating back pressure on the underlying TCP connection thus limiting its data flow. Eventually this “trickle-down” effect achieves end-to-end rate limiting. Depending on buffering in all layers of the network stack, this effect may take some time to propagate.

If you occasionally need to rate-limit a single program on a UNIX system, a simple solution is trickle. Real traffic shaping, like you would perform on a gateway, can be done with tc. This is documented in Chapter 9. Queueing Disciplines for Bandwidth Management of the Linux Advanced Routing & Traffic Control HOWTO.
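For instance, a single program can be wrapped with trickle like this (the rates and URL here are just placeholders):

```shell
# Run wget under trickle, capping it at roughly 100 KB/s down and
# 20 KB/s up. trickle's -d and -u flags take rates in KB/s, and it
# works by limiting how fast the process reads from and writes to
# its sockets, as described in the quoted paper.
trickle -d 100 -u 20 wget http://example.com/largefile.iso
```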

sciurus
4

In the case of a 56k modem versus a 4 Mbps DSL line, there's (usually) no shaping making the speed difference, it's just a difference in the speed of the link.

The reason why it's hard to shape incoming traffic is that there's no buffer in the transmission medium. You either accept the incoming bits or they're lost. For traffic that is about to leave an interface, you can buffer and re-order packets as much as you want (or at least up to the available buffer memory in the device).

For protocols that have a layer on top of TCP (I don't know if that is the case for torrents), it would be a simple matter of pacing requests for further data. Otherwise, the application would need to communicate with the OS to delay ACKing the packets. Most UDP-based protocols will, by necessity, have ACK/resend logic in the app-specific protocol, so at that point it's bordering on trivial to pace them.

One possible route would be to shape the traffic from Internet on the LAN side of your DSL router, as at that point, you'd be shaping on an egress port.
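A rough sketch of that route, assuming eth1 is the gateway's LAN-facing interface and 4mbit is the target downstream rate (both are placeholders for your setup):

```shell
#!/bin/sh
# Shape "download" traffic as it leaves the gateway's LAN interface
# toward the clients. From the gateway's point of view this is egress,
# so packets can be queued and paced normally.
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:10 htb rate 4mbit ceil 4mbit
```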

Vatine
3

I can't answer why you haven't found any solutions that permit shaping incoming data (and don't know any off the top of my head), but as to how the sender knows how fast the receiver can receive data:

The basic design of TCP/IP is that for every packet that the source sends to the destination, it has to wait for the destination to reply back (with an ACK packet) saying that it received the packet.

So if you have a 4 Mbps sender and a 56 Kbps receiver, then the sender has to sit and wait between sending packets for the receiver to respond to each packet (there are some technical details to reduce this overhead, but the premise still holds on an abstract level).

So what happens if the sender is already sending 56Kbps of data and then tries to send a bit more?

The packet gets lost. (Well, potentially queued in a switch's buffer, but when that fills up, the packet gets lost). Since the packet got lost, the receiver never receives it, and therefore never sends an ACK packet. Since the sender never receives this ACK packet (because it was never sent, but also it could be lost, or there could be a network disruption), the sender is required to resend the extra packet. It sits and attempts to resend the packet until it gets through and the ACK reply gets back to it.

So, to recap, once the sender has maxed out the receiver's bandwidth, it has to stop and resend the next packet over and over again until there is enough available bandwidth for it to get through. This effectively reduces the send speed to the maximum that the client can receive at. (And there are optimization methods to reduce the number of times a packet has to be resent in this case, where basically the sender slows down each time it has to resend a packet, but that's beyond the scope of this simplified description.)

Darth Android
1

You can do it with an ifb interface.

With an ifb interface, you can redirect the ingress flow of eth0 (or whatever) to the egress of ifb0 (for example), and apply the limiting rules there.

Check this url of the Linux Foundation: http://www.linuxfoundation.org/collaborate/workgroups/networking/ifb

And this script, which limits incoming and outgoing bandwidth: https://github.com/rfrail3/misc/blob/master/tc/traffic-control.sh
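A minimal sketch of the ifb approach, assuming eth0 is the external interface and 4mbit is the desired incoming limit (both are placeholders):

```shell
#!/bin/sh
# Load the ifb module and bring up the virtual interface.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Attach an ingress qdisc to eth0 and redirect all incoming IP
# packets to ifb0, where they appear as egress traffic.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Apply an ordinary egress shaper (here a token bucket filter) on ifb0.
tc qdisc add dev ifb0 root tbf rate 4mbit burst 32k latency 400ms
```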

0

Check out wondershaper: http://lartc.org/wondershaper/

Regarding incoming traffic:

This is slightly trickier as we can't really influence how fast the internet
ships us data. We can however drop packets that are coming in too fast,
which causes TCP/IP to slow down to just the rate we want. Because we don't
want to drop traffic unnecessarily, we configure a 'burst' size we allow at
higher speed.

Now, once we have done this, we have eliminated the downstream queue totally
(except for short bursts), and gain the ability to manage the upstream queue
with all the power Linux offers.
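The ingress policer wondershaper installs looks roughly like this (eth0 and the 4000kbit rate are placeholders; a policer drops packets above the configured rate rather than queueing them, which is what makes TCP back off):

```shell
#!/bin/sh
# Police incoming traffic on eth0: packets arriving faster than the
# target rate (beyond a small burst allowance) are dropped, so the
# senders' TCP stacks slow down to roughly the configured rate.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 4000kbit burst 40k drop flowid :1
```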
dmourati
-1

Use UFW (Uncomplicated Firewall): http://ubuntuforums.org/showthread.php?t=1260812

This thread shows a simple example; by default, IPs that make 6 connection attempts within 30 seconds are denied:

sudo ufw limit ssh/tcp

Also, a more 'advanced' pair of rules with specified values for time and hit count:

sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH

sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 8 --rttl --name SSH -j DROP