
I've been working with Ubuntu 18.04 and trying to shape traffic with Linux tc. Things went well for the past few months. Here are my commands:

# init queue
sudo tc qdisc add dev enp2s0 root handle 1:0 tbf rate 20mbit limit 16k burst 10k
sudo tc qdisc add dev enp2s0 parent 1:0 handle 10: netem rate 20mbit

# continuously adjust the traffic using the following command from Python
sudo tc qdisc change dev enp2s0 parent 1:0 handle 10: netem rate <bandwidth>kbit delay <rtt>ms loss <loss>%
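A minimal sketch of what that adjustment loop might look like, assuming the enp2s0 interface from above and a hypothetical set_netem() helper (note the use of tc qdisc change: a second add on the same handle fails):

import subprocess

def set_netem(bandwidth_kbit, rtt_ms, loss_pct, dev="enp2s0"):
    # Update the existing netem child qdisc in place ("change", not "add")
    subprocess.run(
        ["sudo", "tc", "qdisc", "change", "dev", dev,
         "parent", "1:0", "handle", "10:", "netem",
         "rate", f"{bandwidth_kbit}kbit",
         "delay", f"{rtt_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

# e.g. shape to ~1 Mbit/s with 50 ms delay and 1 % loss; the real script
# would call this in a loop with fresh values
set_netem(bandwidth_kbit=1000, rtt_ms=50, loss_pct=1.0)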

However, in recent days I noticed that TBF seemed to stop working.

How I know that

I used iperf3 to test the link:

# receiver, a windows pc
iperf3 -s

# sender, a linux PC performing tc & iperf client
iperf3 -u -c <receiver's ip> -b 1.5M -t 1000

The shaped bandwidth was set to vary around 1 Mbps.

  1. I observed a huge lag between the fluctuation of the bandwidth set on the sender and the throughput observed on the receiver.
  2. After the sender exited, the receiver could still receive the remaining packets for a few seconds (summing to around 5~10 Mbit).
  3. Things worked normally when I tried iperf with TCP: iperf3 -c <receiver's ip> -b 1.5M -t 1000. I think that is because TCP has congestion control that probes the available bandwidth and does not produce excessive packets, which is why I believe it was TBF that failed rather than another component. (One way to check TBF directly is to watch its qdisc statistics, as sketched after this list.)
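A small sketch for the check mentioned in item 3, polling the qdisc statistics on the (assumed) enp2s0 interface while a test runs; a working TBF should show a bounded backlog and a growing dropped counter under overload:

import subprocess
import time

# Print qdisc statistics once per second; the tbf line reports
# "backlog ...b ...p" and a "dropped" count
for _ in range(10):
    stats = subprocess.run(
        ["tc", "-s", "qdisc", "show", "dev", "enp2s0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(stats)
    time.sleep(1)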

I've tried

  1. replacing the network card
  2. changing iperf client
  3. replacing the cable

None of the above helped.

1 Answer


Well, the iperf client in UDP mode sends data at the rate you configure with the -b flag, in your case 1.5 Mbps. If you set it to 10 Mbps, it will send at that speed and report it as such. The actual attained bandwidth in this case is reported on the server side, that is, the receiver, as the client has no means of measuring it with UDP.

The tbf queueing discipline is a classic token bucket. Simply put, it passes traffic at the configured rate and buffers the excess until the buffer overflows. The buffered traffic is delivered in the order it was received once the congestion clears, and yes, it will be delayed significantly if you configure your rate in kilobits.
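To put numbers on that: TBF's limit caps the backlog in bytes, so the worst-case queueing delay is limit / rate. A quick back-of-the-envelope calculation, reading the question's 16k limit as 16 KiB:

# Worst-case queueing delay of a token bucket = limit / rate
limit_bytes = 16 * 1024                    # tbf "limit 16k"
for rate_bps in (20e6, 1.5e6, 100e3):      # 20 Mbit, 1.5 Mbit, 100 kbit
    delay_ms = limit_bytes * 8 / rate_bps * 1000
    print(f"rate {rate_bps / 1e3:8.0f} kbit/s -> max delay {delay_ms:7.1f} ms")

At 20 Mbit/s the 16 KiB bucket drains in about 6.6 ms, but at 100 kbit/s the same backlog takes over 1.3 s, which is why kilobit-scale rates feel so laggy.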

So what you see is actually expected if you dump a few megabits down a shaper set to a much smaller rate: the sender (client) will just dump data at the rate set with -b and leave the building; TBF will pass what it is allowed to and buffer the rest, with some data lost to buffer overflows; and the receiver will receive the data with a lag due to the excessive buffering. With UDP you should look at the statistics at the receiver end, not on the sender side.
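As a sketch of collecting those receiver-side statistics programmatically, assuming iperf3's --json report (the end.sum field names are taken from iperf3's JSON output and should be verified against your version; -1 makes the server exit after one test):

import json
import subprocess

# Run one iperf3 server session and capture its JSON report
result = subprocess.run(
    ["iperf3", "-s", "-1", "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# For a UDP test the receiver-side totals live under end.sum
# (field names assumed from iperf3's JSON output)
sum_stats = report["end"]["sum"]
print(f"received {sum_stats['bytes']} bytes at "
      f"{sum_stats['bits_per_second'] / 1e6:.2f} Mbps, "
      f"lost {sum_stats['lost_packets']} packets "
      f"({sum_stats['lost_percent']:.1f}%)")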

Peter Zhabin
  • Thanks! But I believe I had set the size of the TBF buffer to 16 kB with `sudo tc qdisc add dev enp2s0 root handle 1:0 tbf rate 20mbit limit 16k burst 10k`. Yet as I mentioned, the receiver could still receive ~10 Mbit of packets after the sender stopped sending anything. – MingXuan Yan Jul 21 '22 at 12:29