
Using iperf3 to test the network with --length 1000 --no-delay:

iperf3 --interval 1 --time 3 --no-delay --length 1000 --parallel 100 --client 10.0.0.3

Result:

[SUM]   0.00-3.00   sec   835 MBytes  2.33 Gbits/sec   13             sender
[SUM]   0.00-3.01   sec   835 MBytes  2.33 Gbits/sec                  receiver

With --length 1 --no-delay:

iperf3 --interval 1 --time 3 --no-delay --length 1 --parallel 100 --client 10.0.0.3

The result:

[SUM]   0.00-3.00   sec   751 KBytes  2.05 Mbits/sec    5             sender
[SUM]   0.00-3.02   sec   751 KBytes  2.03 Mbits/sec                  receiver

-> I am getting 2 Mbits/sec instead of 2 Gbits/sec.

It seems like I am hitting some hardware or software limit somewhere that prevents small packets from reaching a higher throughput.

1) What could that limit be?
2) How do I detect it? (On Unix, is there a command to check it? A few candidate commands are sketched below.)
3) Is there a way to increase that limit?
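
For (2) and (3), these are the Linux-side checks I know of (eth0 is a placeholder for the actual interface; which counters exist depends on the NIC driver):

# Packet rate per interface (rxpck/s, txpck/s columns), from the sysstat package:
sar -n DEV 1

# NIC driver/hardware counters; look for drop, missed, or fifo errors:
ethtool -S eth0

# Interrupt coalescing settings; these trade interrupt rate against per-packet latency:
ethtool -c eth0

# Per-CPU softirq packet processing; the 2nd column counts drops, the 3rd time_squeeze:
cat /proc/net/softnet_stat

# Kernel budget for packets processed per softirq poll cycle:
sysctl net.core.netdev_budget

Possible knobs for (3), with illustrative values rather than recommendations:

sudo ethtool -C eth0 rx-usecs 0           # disable rx interrupt coalescing delay
sudo sysctl -w net.core.netdev_budget=600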

UPDATE:

Let's calculate the packets per second:
For a 1000-byte payload: PPS = 2.33 * 1024^3 / 8 / 1000 ≈ 313K
For a 1-byte payload: PPS = 2.03 * 1024^2 / 8 / 1 ≈ 266K
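
The same arithmetic as shell one-liners, keeping the 1024-based factors used above (if iperf3 actually reports decimal prefixes, i.e. 10^9 bits/sec for Gbits, the figures become roughly 291K and 254K, the same order of magnitude):

echo '2.33 * 1024^3 / 8 / 1000' | bc    # ~313K pkt/s for the 1000-byte run
echo '2.03 * 1024^2 / 8 / 1' | bc       # ~266K pkt/s for the 1-byte run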

Shouldn't we expect more packets per second when the payload is smaller?

Comments:

  • I don't think this is on-topic. Apart from that, there is plenty of information online about how packet size affects throughput and why. Just search for [packet size network performance](https://www.google.com/search?q=packet+size+network+performance). – Steffen Ullrich Jul 25 '17 at 16:37
  • +1 @Steffen but you're sending 1000 times less traffic and getting 1/1000th the throughput. What's the question? – quadruplebucky Jul 25 '17 at 20:31
  • @quadruplebucky why do you say I am sending 1000 times less traffic? iperf will send as much traffic as it can in both cases (in terms of Mbits/s) – benji Jul 25 '17 at 20:54
  • 1000 times less per packet. iperf measures what it sends, not the packet overhead. You're sending 1/1000th per unit; it's an insignificant amount compared to the IP overhead... – quadruplebucky Jul 25 '17 at 21:07
  • @quadruplebucky updated question – benji Jul 25 '17 at 22:25
  • **now** this is definitely off-topic: both len 1 and len 1000 in iperf are *almost certainly* contained in a single packet, but we're both guessing without a capture. Packets != bytes. – quadruplebucky Jul 25 '17 at 22:32
  • To me this suggests the primary bottleneck is not bandwidth. It's more likely to be a hardware (switch?), latency, host CPU, or software overhead. The TCP packet is likely to be 40 bytes, so you've reduced your packet size from 1040 bytes to 40 bytes. – Tim Jul 25 '17 at 22:33
  • @Tim CPU usage is very low (<5%), as is latency (~0.070ms). There might be a software limitation (OS); that's what I am trying to figure out. – benji Jul 25 '17 at 22:41
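
UPDATE 2:

A rough check on the overhead point from the comments, assuming each 1-byte write really leaves the host as its own minimum-size Ethernet frame (a capture would confirm this, as noted above): 1 byte of payload + 20 bytes TCP + 20 bytes IP + 14 bytes Ethernet header is 55 bytes, which is padded up to the 64-byte minimum frame (including the 4-byte FCS); adding the 8-byte preamble and 12-byte inter-frame gap gives 84 bytes on the wire per packet:

echo '266076 * 84 * 8 / 1000000' | bc   # ~178 Mbit/s on the wire for ~2 Mbit/s of goodput

That is still far below what the same path sustains in the 1000-byte run, which would suggest the ceiling is per-packet cost (syscalls, interrupts, NIC processing) rather than raw bandwidth.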

0 Answers