I've been testing to see if I can get any benefit from enabling jumbo frames. I've set up two identical Dell R210 servers with quad-core Xeon E3122 CPUs, 8 GB of RAM, and Broadcom NetXtreme II BCM5716 Gigabit Ethernet cards. Both are running Debian Squeeze with the bnx2 network driver. The servers are wired back to back on one NIC each, on a private subnet, and I'm using the other NIC on each for SSH and monitoring. I've added the OS tuning parameters I'm aware of:
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
sysctl -w net.core.netdev_max_backlog=300000
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_fin_timeout=15
sysctl -w net.ipv4.tcp_timestamps=0
ifconfig ethX txqueuelen 300000
ethtool -K eth1 gso on
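For completeness, here's a sketch of how I'd expect these settings to persist across reboots on Squeeze; the sysctl values are the ones above, but the interface stanza (address, netmask) is only illustrative, not my actual config:
# /etc/sysctl.conf
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_timestamps = 0

# /etc/network/interfaces (back-to-back NIC; the address is a placeholder)
iface eth1 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    mtu 9000
    up /sbin/ifconfig eth1 txqueuelen 300000
    up ethtool -K eth1 gso on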
ethtool -k output shows:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
ntuple-filters: off
receive-hashing: off
Both servers are configured for 9000-byte jumbo frames via ifconfig (sudo /sbin/ifconfig eth1 mtu 9000), and I confirmed the MTU on both systems with ping (ping -s 8972 -M do <other IP>). When I test bulk transfers with netperf, tcpdump confirms that the majority of data packets use the full 9000-byte MTU, giving an on-the-wire frame size of 9014 bytes.
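In case the exact test matters, the netperf run and the capture look roughly like this; the peer address, test length and size filter are illustrative placeholders rather than my literal command lines:
# on one box
netserver

# on the other: plain bulk TCP stream across the private link
netperf -H 192.168.1.2 -t TCP_STREAM -l 30

# in parallel, watch frame sizes on the jumbo interface
tcpdump -i eth1 -nn -e 'tcp and greater 4000'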
However, when I test with a "real" application (I set up Postgres on one server and used the other as the client), the largest packet size reported by tcpdump and tshark is 2160 bytes, even for very large SELECTs with result sets running into megabytes. I can't get it to go any higher, despite having tried quackery like setting advmss on the route using iproute2.
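For concreteness, the advmss attempt and the handshake check were along these lines; the subnet and the 8960 value are placeholders, not the exact figures I used:
# advertise a larger MSS on the route for the private subnet
ip route change 192.168.1.0/24 dev eth1 advmss 8960

# check what MSS each side actually advertises in the SYN/SYN-ACK
tcpdump -i eth1 -nn 'tcp[tcpflags] & (tcp-syn) != 0'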
Thoughts?
TIA.