I'm trying to limit the ingress traffic of veth0 in namespace ns0.
What I do is issue the following commands:
# create netns
ip netns add ns0
# create veth pair
ip link add dev veth0 type veth peer name veth1
ip link set dev veth0 netns ns0
# set them UP ...
ip netns exec ns0 ip addr add ... # add ipv4 addr to veth0
# link veth1 to br0, a Linux bridge that connects the physical interface
# bond1, where the test traffic comes from.
ip link set dev veth1 master br0
# setup traffic control rules
ip netns exec ns0 tc qdisc add dev veth0 handle ffff: ingress
ip netns exec ns0 tc filter add dev veth0 parent ffff: protocol ip prio 1 u32 match ip src 0.0.0.0/0 police rate 100mbit burst 1mbit drop flowid :1
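For context, tc's police action is a token bucket, and the burst argument sets the bucket depth. A rough back-of-the-envelope sketch of the sizing implied by the filter above, assuming tc's decimal unit convention (1mbit = 10^6 bits):

```shell
# Token-bucket arithmetic for "police rate 100mbit burst 1mbit"
# (assumption: tc parses "mbit" as 10^6 bits).
rate_bits=100000000                           # 100mbit, in bits/s
burst_bits=1000000                            # 1mbit burst
burst_bytes=$((burst_bits / 8))               # bucket depth in bytes
burst_ms=$((burst_bits * 1000 / rate_bits))   # time to drain a full bucket
echo "burst: ${burst_bytes} bytes (~${burst_ms} ms at full rate)"
```

A bucket of about 125 KB drains in roughly 10 ms at the target rate; a TCP sender that bursts beyond it sees drops and repeatedly backs off, which is one common reason measured throughput lands well below the policed rate.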
After all this, I expect the iperf result to be around 100 Mbps, but I actually get only about 14 Mbps.
Is there any implicit limitation in tc that I'm not aware of?
FWIW, I have yet to reach line speed when adding any tc shaping with reasonable limits. To get line speed, I have to set the max rates to about 10x my actual line speed; I have seen this problem with multiple algorithms, and the CPU is nowhere near fully used. My max down is listed at 25 Mb but is closer to 31 Mb, while upload tops out around 10 Mb. If I add traffic control, I'm lucky to get 6-7 Mb up and 15-20 Mb down (and I do enter the faster measured speeds to start with, BTW). If I find anything, I'll try to remember to post here... – Astara – 2018-07-16T21:53:16.200