I realise many similar questions have already been asked, but so far I haven't found a solution to my problem.
I have a virtual Linux server (running Debian Squeeze) that I use to test website speeds, in order to measure increases and decreases in the load times of those websites. I'm attempting to limit the server's bandwidth and latency so that I can get close to real-world load times, but so far I've failed.
What I want specifically is the following:
- To set an incoming and outgoing latency of 50 ms.
- To set an incoming bandwidth limit of 512 kbps.
- To set an outgoing bandwidth limit of 4096 kbps.
I've been reading up on netem and the tc command, but it's still all a bit over my head. I've managed to put together the command below to control the latency, and it seems to work, but I'm not even sure whether it affects only the outgoing latency or both directions:
tc qdisc add dev eth0 root netem delay 50ms
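For reference, I've been checking what's actually installed with the command below; it lists the attached qdiscs, but its output doesn't make it obvious to me which direction the delay applies to:
tc qdisc show dev eth0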
Are there any network gurus around who can help me out?
Edit:
After further research I've gotten halfway to my goal. Using the command below, all outgoing traffic behaves as I want it to:
tc qdisc add dev eth0 root tbf rate 4.0mbit latency 50ms burst 50kb mtu 10000
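As an aside, when experimenting with different values I first clear the existing root qdisc, since tc otherwise refuses with a "File exists" error:
tc qdisc del dev eth0 root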
However, I still haven't been able to throttle the incoming traffic properly. I've learnt that I'm supposed to use an "Ingress Policer filter", and I've been trying to do just that with the commands below, playing around with different values, but so far with no luck.
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 flowid :1 police rate 1.0mbit mtu 10000 burst 10k drop
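I've also read that ingress policing is fairly crude, and that some people instead redirect incoming traffic to an IFB pseudo-device so that an ordinary egress qdisc can shape it. I haven't managed to test this yet, so the following is only a sketch of what I've pieced together from the ifb and mirred documentation, starting from a clean configuration (the 512kbit value is my target incoming rate from above):
# load the IFB pseudo-device and bring it up
modprobe ifb numifbs=1
ip link set dev ifb0 up
# redirect all traffic arriving on eth0 to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
# shape on ifb0's egress, which is effectively eth0's ingress
tc qdisc add dev ifb0 root tbf rate 512kbit latency 50ms burst 10k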
The commands do affect the bandwidth, though: with the values above, the speed starts at 2 MB/s and, as a transfer progresses, slowly drops to around 80-90 kB/s, which it reaches after about 30 seconds of transfer.
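In case my measuring method matters: I'm judging the incoming rate by downloading a large file and watching the reported transfer speed, roughly like this (the URL is just a stand-in for my actual test file):
wget -O /dev/null http://example.com/testfile.bin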
Any ideas on what I'm doing wrong?