11

I tested a line for its link quality with iperf. The measured speed (UDP, port 9005) was 96 Mbps, which is fine, since both servers are connected to the internet at 100 Mbps. However, the reported datagram loss rate was 3.3-3.7%, which I found a little too high. Using a high-speed transfer protocol, I also recorded the packets on both sides with tcpdump and then calculated the packet loss myself - on average 0.25%. Does anyone have an explanation for where this big difference may be coming from? What is an acceptable packet loss in your opinion?
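For reference, this is roughly how I calculated the loss from the two captures. The packet counts below are just placeholders for the numbers I pulled out of the tcpdump files on each side (e.g. by filtering each capture for the transfer's port and counting packets):

```python
# Hypothetical packet counts taken from the sender- and receiver-side captures.
sent_packets = 820000      # counted in the sender-side capture
received_packets = 817950  # counted in the receiver-side capture

loss_pct = (sent_packets - received_packets) / sent_packets * 100
print(f"packet loss: {loss_pct:.2f}%")  # ~0.25% with these example numbers
```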

stefita
  • 113
  • 1
  • 1
  • 7

4 Answers

3

I've experienced significant data loss with iPerf in UDP mode as a result of the CPU not being able to keep up. For some reason, iPerf with UDP seems to be much more CPU intensive than iPerf with TCP. Do you experience the same loss percentages when you set iPerf to half the rate?

To answer your second question about how much packet loss is acceptable, it really depends on what application you are running and how much traffic you have. Really, there shouldn't be any loss if you are under your bandwidth limit. For most things, I probably wouldn't complain too much about 0.25%, but that is still a lot of loss if you are running at really high rates.

[EDIT 1] Some other thoughts that I've had on the topic:

  1. Try incrementally increasing the iPerf rate (see the sketch after this list). If there is a systemic problem somewhere, it is likely that you'll experience the same percentage of loss no matter what the rate. If you are at the limits of your hardware, or your provider does some sort of RED, then there will likely be no loss up to a certain rate, and then incrementally worse loss the higher above that you go.
  2. Do your tcpdump measurement of the iPerf session, just to verify that your tests are accurate.
  3. Try iPerf with TCP. This won't report loss, but if you are getting loss then the connection won't be able to scale up very high. Since latency will also affect this, make sure to test to an endpoint with as little latency as possible.
  4. Depending on what gear you have on the inside of your connection, make sure you are as close to it as possible. E.g. if you have multiple switches between your test system and the edge router, move to a directly connected switch.
  5. If you have a managed switch, check the stats on it to make sure the loss isn't occurring there. I've encountered some cheaper switches that start dropping when you get close to 100Mbps of UDP traffic on them (mostly old and cheap unmanaged switches though).
  6. Try simultaneous iPerfs from two different clients to two different hosts, so that you can be sure the limit isn't a result of CPU or a cheap local NIC.
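A minimal sketch of the rate-stepping idea from item 1, in Python. It assumes the classic iperf (iperf2) client is installed on the test machine, that an iperf server is already listening in UDP mode on the far end, and that the server address and the list of rates below are placeholders for your own values:

```python
import subprocess

SERVER = "192.0.2.10"  # placeholder for the remote iperf server
PORT = 9005            # the UDP port used in the original test

# Step the UDP target bandwidth up and let iperf report the loss at each rate.
# Roughly constant loss at every rate points to a systemic problem; loss that
# only appears above a certain rate points to a hardware or provider limit.
for rate in ("10M", "25M", "50M", "75M", "95M"):
    print(f"--- target rate {rate} ---")
    subprocess.run(
        ["iperf", "-c", SERVER, "-u", "-p", str(PORT), "-b", rate, "-t", "10"],
        check=False,
    )
```

The far end only needs `iperf -s -u -p 9005` running for the duration of the sweep.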
Jed Daniels
  • 7,172
  • 2
  • 33
  • 41
  • That could be a good reason. Unfortunately I can't test right now because of firewall problems. I'll get back to your answer as soon as I've performed a new test. – stefita May 28 '10 at 07:32
0

Well, TCP has built-in mechanisms to maximize the utilization of a single flow, while UDP does not. Each application therefore has to implement its own mechanisms, and each one probably takes a different approach. iPerf likely tolerates more packet loss because it tries to reach the maximum available bandwidth without caring whether the data is actually received. The other application probably tries to avoid losing packets and reduces its packet rate to match the available throughput of the connection.

Pipe
  • 191
  • 3
0

Have you used tcpdump to check the packet loss while running iPerf, to make sure the loss you calculate with tcpdump matches what iperf reports?

You may discover that your measurement methods are not comparable.
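One way to run that comparison, as a sketch: capture the iperf run on both hosts with tcpdump's write-to-file mode, then count the datagrams to the test port in each file and compute the loss the same way as before. The filenames and the use of scapy here are assumptions for illustration, not part of the original setup:

```python
from scapy.all import UDP, rdpcap  # assumes scapy is installed (pip install scapy)

def count_test_datagrams(pcap_file, port=9005):
    """Count UDP datagrams sent to the test port in a capture file."""
    return sum(
        1
        for pkt in rdpcap(pcap_file)
        if pkt.haslayer(UDP) and pkt[UDP].dport == port
    )

sent = count_test_datagrams("sender.pcap")        # placeholder filename
received = count_test_datagrams("receiver.pcap")  # placeholder filename
loss_pct = (sent - received) / sent * 100
print(f"tcpdump-based loss: {loss_pct:.2f}%  (compare against iperf's own report)")
```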

Craig
  • 591
  • 2
  • 5
0

Does iperf automatically discard packets that arrive out of sequence with UDP? You might be looking at a little bit of jitter on the connection.

Lloyd Baker
  • 149
  • 4