An IP range has worse performance than others on CentOS 7


I have a server, a laptop, a desktop and a home-grade router connected to a gigabit switch. My issue is that the server has limited bandwidth to the 192.168.1.0/24 network. This issue is not present when using IPv6 link-local addresses, or when a client -- such as the laptop or the desktop -- connects to the server from the internet.

I have ruled out the possibility that the issue is caused by a client, as all clients exhibit the same behaviour.

My setup:

  1. has two network segments, 192.168.1.0/24 and the internet
  2. all of my devices are directly connected to the same switch
  3. all of my devices have an IP in the 192.168.1.0/24 range
  4. the internet can be reached through a gateway -- the router at 192.168.1.1; NAT is obviously present
  5. there is no IPv6, other than the link-locals that are automatically configured
  6. there are no other devices, such as firewall appliances

Here I demonstrate my issue by running iperf3 on the server in client mode using the -c flag.

[user@srv ~]$ iperf3 -c 192.168.1.115
Connecting to host 192.168.1.115, port 5201
[  4] local 192.168.1.40 port 47062 connected to 192.168.1.115 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   853 KBytes  6.98 Mbits/sec    0   59.4 KBytes # Only 6.98 Mbits/sec!
...

[user@srv ~]$ iperf3 -c fe80::[redacted]%eth0
Connecting to host fe80::[redacted]%eth0, port 5201
[  4] local fe80::[redacted] port 36236 connected to fe80::[redacted] port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   111 MBytes   929 Mbits/sec    0    225 KBytes
...
[user@srv ~]$ 

Please note:

  1. connecting over IPv4 to a host in the 192.168.1.0/24 network (an iperf3 server in this case) yields very poor bandwidth.
  2. using IPv6 link-local addresses does not exhibit this issue.
  3. no IPv4 client connecting from the internet experiences this issue.

The iperf3 server gives me equivalent results, which tells me that the bandwidth limitation is symmetrical.

I have run iperf between the laptop and the desktop using IPv4, with no issues. This rules out the possibility that the switch, links, or devices are at fault.

Everything points at the server. I can't recall configuring any traffic shaping on it.

The server runs CentOS 7, the desktop Windows 10, and the laptop Fedora 26.

A notable thing about the 192.168.1.0/24 network is that the server's firewall is configured like this:

# firewall-cmd --zone=public --add-interface=eth0
# firewall-cmd --zone=home --add-source=192.168.1.0/24
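For reference, the zone bindings above can be verified with standard firewall-cmd queries (these commands are not from my original troubleshooting; they are the usual way to check which zone the LAN traffic actually lands in):

```shell
# Show which zones are active and what interfaces/sources they are bound to
firewall-cmd --get-active-zones

# List every rule in the two zones involved
firewall-cmd --zone=home --list-all
firewall-cmd --zone=public --list-all
```

Traffic sourced from 192.168.1.0/24 should be matched by the 'home' zone, with everything else on eth0 falling through to 'public'.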

I don't think that such a simple rule could cause such abysmal performance. I have tried disabling firewalld:

# systemctl stop firewalld

No luck. I think this rules out the possibility that the issue is caused by iptables.

netstat -i shows nothing alarming; the physical links are fine.
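A couple of standard link checks complement netstat -i here (not from my original post, just the usual commands; a speed/duplex mismatch is a classic cause of this kind of slowdown):

```shell
# Confirm the negotiated speed and duplex on the interface
ethtool eth0            # look for "Speed: 1000Mb/s" and "Duplex: Full"

# Per-interface RX/TX error and drop counters
ip -s link show eth0
```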

I run libvirt, and I know it has the capability to control iptables. There are virtual network segments, such as 192.168.122.0/24, which get natted/routed/DHCP-served/dnsmasq'ed by libvirtd. I believe these are irrelevant, as the routing table of the server is correct. I have also tried stopping all VMs and their associated networks in libvirt, with no luck.
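For completeness, this is roughly how libvirt's guests and networks can be stopped to exclude them (standard virsh commands; the network name 'default' is libvirt's usual 192.168.122.0/24 network and is an assumption about this setup):

```shell
# Enumerate guests and virtual networks
virsh list --all
virsh net-list --all

# Take down the default NAT network ('default' is assumed here)
virsh net-destroy default
```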

I have run # iptables -L, which produced a list of 197 lines; the rules were correct. To reach the 'home' zone, an incoming packet has to be compared against 12 rules, and then against the rules defined in the zone itself -- there are 6 rules in the 'home' zone. There are no rules filtering the OUTPUT chain. I have ruled out iptables.
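When auditing rules like this, the verbose listing is more telling than plain iptables -L, because the per-rule packet counters show which rules the slow traffic actually hits (the chain name below assumes firewalld's usual IN_&lt;zone&gt; naming on CentOS 7):

```shell
# List rules with packet/byte counters and rule numbers
iptables -L -v -n --line-numbers

# Watch the 'home' zone counters change while iperf3 is running
# (IN_home is firewalld's conventional chain name for the zone)
watch -n1 'iptables -L IN_home -v -n'
```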

I don't have the slightest clue what I should try next.

Axuttaja

Posted 2017-10-27T19:41:20.557

Reputation: 1


I don't understand why you need both IPv4 & 6 in an internal network. If you intend to keep both, this answer might help.

– harrymc – 2017-10-27T19:54:23.840

I tried disabling IPv6, this didn't solve the issue. – Axuttaja – 2017-10-27T20:28:12.660

Keep IPv6 disabled on all devices - one variable less. It would help to add a plan and a detailed description of your local network, including IP addresses & masks, gateways, DNS, DHCP, firewall and any setting that you might have changed. Try also to disable all firewalls except the one facing the internet (they are really unnecessary). – harrymc – 2017-10-28T07:28:18.667

So when you access 192.168.1.0/24 through a firewall, you get less performance than when you access the same physical LAN segment through IPv6 without a firewall. In other words, the firewall (however it is set up) limits performance. What exactly is surprising about that? – dirkt – 2017-10-28T09:29:34.720

@dirkt: This question is as clear as mud; so much so that I can’t understand why it hasn’t been DV’ed or VTC’ed.  But, in defense of the OP, they say that turning off the firewall didn’t change the result.  Also, it seems almost as if they’re saying that they get better performance when talking to the Internet than they do when talking to computers in the same room.  But, as I say, the question is hard to read. – G-Man Says 'Reinstate Monica' – 2017-10-28T17:06:26.487

@dirkt: I do not have a firewall between the server and the 192.168.1.0/24 network, the only firewalls in my setup are the windows firewall, and linux's iptables which is controlled by firewalld. The server has the IP 192.168.1.40. The iperf server and client are not only in the same network, but directly connected to the same switch. – Axuttaja – 2017-10-28T19:38:01.780

Please edit your question and explain which hosts you have, which network segments they belong to, which addresses their interfaces have, and details about the firewall employed. If for example your server uses iptables by rules generated from firewalld on eth0 with an address in 192.168.1.0/24, then yes, these rules can cause slowdown. So the next obvious step is to look at the rules with iptables. There's just not enough information in your question to make sense of your setup. – dirkt – 2017-10-28T19:51:43.460

I have now updated the question. I also ran # iptables -L; this produced 197 lines. The rules created by firewalld and libvirtd were correct, and firewalld had applied some neat optimizations to the rules. – Axuttaja – 2017-10-28T20:58:19.753

Answers


I have now solved this issue.

I deleted the eth0 connection in NetworkManager:

# nmcli connection delete id eth0

And then created an equivalent substitute; now everything works as expected.
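The answer does not show the recreation step. Assuming a plain DHCP-configured Ethernet profile (the connection name and addressing method here are assumptions, not the exact commands used), the substitute might look like this:

```shell
# Recreate a basic Ethernet profile for eth0
# (con-name and ipv4.method auto are assumptions; the original post
# does not show the exact replacement configuration)
nmcli connection add type ethernet ifname eth0 con-name eth0 ipv4.method auto
nmcli connection up eth0
```

Deleting and recreating the profile discards whatever stale or mis-migrated settings the old connection carried, which is presumably why this fixed the throughput.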

Axuttaja
