
On my Linux router, I am using the following configuration to limit the rate of traffic towards port 44444 of a client in the LAN (the client's address is 192.168.10.2, connected through the router's eth1 interface):

tc qdisc add dev eth1 root handle 1: htb default 2
tc class add dev eth1 parent 1: classid 1:1 htb rate $RATE
tc class add dev eth1 parent 1: classid 1:2 htb rate 100mbit
tc filter add dev eth1 protocol ip parent 1: prio 1 u32 match ip dst 192.168.10.2/32 match ip dport 44444 0xffff flowid 1:1

What I expect from this configuration is that traffic towards 192.168.10.2:44444 will be shaped according to the $RATE parameter, whereas all other traffic will be left basically untouched (as the LAN is 100 Mbit/s).

To test this configuration I'm sending UDP packets towards 192.168.10.2:44444 at various rates, keeping track of the number of packets lost and of the one-way delay variations. What I observed during my tests is that packets exceeding the rate never get discarded. Instead, they get queued in a buffer that keeps growing without (apparently) ever hitting a size limit.
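For reference, the sender side can be sketched as a small bash function (hypothetical; my actual test tool isn't shown here). It writes 1400-byte datagrams through bash's /dev/udp device, one every 5 ms, which works out to roughly 2.24 Mbit/s of payload:

```shell
# Hypothetical sender sketch (bash): send COUNT 1400-byte UDP datagrams
# to DEST:PORT, spaced 5 ms apart (~2.24 Mbit/s of payload).
send_udp () {
  local dest=$1 port=$2 count=$3 payload
  payload=$(head -c 1400 /dev/zero | tr '\0' 'x')    # 1400-byte dummy payload
  for _ in $(seq 1 "$count"); do
    printf '%s' "$payload" > "/dev/udp/$dest/$port"  # one datagram per redirect
    sleep 0.005
  done
}
# send_udp 192.168.10.2 44444 2000   # ~10 s of traffic
```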

For example:

Using RATE=30kbit and sending packets at around 2 Mbit/s (1400-byte payloads, packets spaced 5 ms apart) for 10 seconds, I get the following stats from tc:

qdisc htb 1: root refcnt 2 r2q 10 default 2 direct_packets_stat 0 ver 3.17
Sent 104901 bytes 85 pkt (dropped 0, overlimits 185 requeues 0)
backlog 0b 0p requeues 0

(Stats shown through tc -s -d qdisc show dev eth1)

In fact, packets get received by 192.168.10.2 for more than 26 seconds (i.e., 16 seconds after the sender has finished).

Using RATE=5mbit and sending packets at 20 Mbit/s, I get the following stats:

qdisc htb 1: root refcnt 2 r2q 10 default 2 direct_packets_stat 0 ver 3.17
Sent 6310526 bytes 4331 pkt (dropped 0, overlimits 8667 requeues 0)
backlog 0b 0p requeues 0

although the one-way delay this time doesn't grow beyond 160 ms.

I got similar results when specifying a burst size too: no significant change, no matter how low I set it (I decreased it down to 1kbit).
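For what it's worth, this is how a burst value would be attached to the class (note that htb's burst is a byte count, and a value smaller than one full packet cannot have a per-packet effect, which may be why lowering it changed nothing):

```shell
# Variant of the class above with an explicit (small) burst.
# burst is sized in bytes; anything below one full packet (~1.5 kB here)
# cannot take effect, since a packet is always dequeued whole.
tc class change dev eth1 parent 1: classid 1:1 htb rate 30kbit burst 1600b
```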

Unfortunately, I can't find a reasonable explanation for these results, despite having read various manuals and references about Linux tc and htb. I'd be glad if anyone could help me figure this out.

thanks


Update. I found a very useful and clear description of the Linux traffic controller's internals; you can find it here. Another useful resource is the OpenWRT Wiki. I actually already knew about the former article, but apparently I missed the important bits.

Long story short: the buffer where my packets get queued is, of course, the egress queue of the network interface. Packets in the egress queue get selected for transmission according to the queueing discipline set through the tc command. Interestingly, the egress queue is not measured in bytes but in packets (no matter how big the packets are). This is, in part, why in my experiments I never managed to hit the queue size limit.

The egress queue size is shown by the ifconfig command (txqueuelen field). On my Linux box the default is 1000 packets, but you can easily change it through ifconfig DEV txqueuelen SIZE. By decreasing the queue size to 1 packet I finally managed to force shaped packets to be discarded (never reaching the client). So I guess that's basically it.
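With modern iproute2 the same queue length can be inspected and changed without ifconfig (a sketch, assuming the eth1 setup above):

```shell
# Show the interface; the "qlen" field is the txqueuelen (default 1000)
ip link show dev eth1
# Shrink the driver queue to a single packet
ip link set dev eth1 txqueuelen 1
```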

One last interesting fact I noticed is that the token bucket filter (tbf), as opposed to the hierarchical token bucket (htb), provides its own buffer where packets are queued before being transmitted. I guess that using that qdisc with a small enough queue you can force packets to be dropped no matter how big the egress queue is. I haven't experimented with it, though.
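A tbf setup with an explicit buffer might look like this (a sketch I haven't tested, as said above; limit is the qdisc's own queue size in bytes, so excess packets are dropped there rather than piling up in the interface queue):

```shell
# Hypothetical tbf sketch: 30 kbit/s with a ~2-packet queue.
# Packets arriving while the 3000-byte queue is full are dropped by tbf itself.
tc qdisc add dev eth1 root tbf rate 30kbit burst 1600b limit 3000b
```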

Hope this helps.

bmarcov
  • I struggled with this exact question until I found this. A big thank you for posting the update clarifying how it works! You should post your update as an answer. – K Erlandsson Feb 11 '19 at 20:00
  • Reading those articles a bit more I think your update is slightly incorrect. The "egress queue" as you write it, is actually the queue inside the TC qdisc. In the HTB case, the size of the queue is inherited from txqueuelen by default. [this patch](https://patchwork.ozlabs.org/patch/274798/) allows you to set the queue length by specifying direct_qlen (which is not in the man page but in the tc qdisc help). In other words, TBF doesn't have an extra buffer, it is just better documented how you set its size. – K Erlandsson Feb 11 '19 at 20:55
