
I've been testing to see if I can get any benefit from enabling jumbo frames. I've set up two identical Dell R210 servers with quad-core Xeon E3122 CPUs, 8 GB of RAM and Broadcom NetXtreme II BCM5716 Gigabit Ethernet cards, running Debian Squeeze with the bnx2 network driver on both. The servers are wired back to back on one NIC each on a private subnet, and I'm using the other NIC on each for SSH and monitoring. I've applied the OS tuning parameters I'm aware of:

sysctl -w net.core.rmem_max=134217728             
sysctl -w net.core.wmem_max=134217728             
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"  
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"  
sysctl -w net.core.netdev_max_backlog=300000  
sysctl -w net.ipv4.tcp_sack=0  
sysctl -w net.ipv4.tcp_fin_timeout=15  
sysctl -w net.ipv4.tcp_timestamps=0    
ifconfig ethX txqueuelen 300000  
ethtool -K eth1 gso on
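
For what it's worth, to make the sysctl settings survive a reboot they would go in /etc/sysctl.conf along these lines (same values as above; the ifconfig and ethtool lines would need an init hook such as /etc/rc.local):

# /etc/sysctl.conf — persistent equivalents of the runtime settings above
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_timestamps = 0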

ethtool -k output shows:

rx-checksumming: on  
tx-checksumming: on
scatter-gather: on   
tcp-segmentation-offload: on  
udp-fragmentation-offload: off  
generic-segmentation-offload: on  
generic-receive-offload: on  
large-receive-offload: off  
ntuple-filters: off  
receive-hashing: off

Both servers are configured for 9000-byte jumbo frames via ifconfig (sudo /sbin/ifconfig eth1 mtu 9000), and I confirmed the MTU on both systems with ping (ping -s 8972 -M do <other IP>). When I test bulk transfers with netperf, tcpdump confirms that the majority of data packets use the full 9000-byte MTU, giving a frame size of 9014 bytes on the wire.
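
For reference, the bulk-transfer test was along these lines (the 10.0.0.x address is a placeholder for the private subnet):

# On server A (placeholder address 10.0.0.1): start the netperf daemon
netserver

# On server B: run a 30-second bulk TCP transfer to server A
netperf -H 10.0.0.1 -t TCP_STREAM -l 30

# In parallel, check frame sizes on the wire (-e prints the link-level
# header, including the frame length)
tcpdump -i eth1 -nn -e tcp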

However, when I test with a "real" application (Postgres on one server, with the other acting as a client), the maximum packet size reported by tcpdump and tshark is 2160 bytes, even for very large SELECTs with result sets running into megabytes. I can't get it to go higher, despite having tried quackery like setting advmss on the route with iproute2.
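
The packet sizes came from watching the Postgres connection directly, something like this (5432, the default Postgres port, is assumed):

# Watch frame sizes for the Postgres session (default port 5432 assumed);
# the largest frames observed were 2160 bytes, never the full 9014
tcpdump -i eth1 -nn -e tcp port 5432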

Thoughts?

TIA.

sevenr

1 Answer


Postgres may not be the best "real" application for fully packing jumbo frames. Based on an old mailing-list thread, it looks like the developers have tried to improve performance using TCP_NODELAY (which disables the Nagle algorithm) and/or TCP_CORK.
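
If you want to confirm that on your setup, you can watch the socket options a running backend sets; a rough sketch (the pgrep pattern is illustrative, pick the backend serving your client):

# Attach to a Postgres backend and trace its setsockopt calls
strace -f -e trace=setsockopt -p "$(pgrep -f 'postgres:' | head -1)"

# A line like the following indicates Nagle is disabled on the connection:
#   setsockopt(9, SOL_TCP, TCP_NODELAY, [1], 4) = 0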

Try using a different application, such as...

  • HTTP (web server on one host, pull large files from the other; see the sketch after this list)
  • NFS (mount with "rsize=8192,wsize=8192")
  • Expose your Postgres data using SOAP
  • Try MySQL, DB2 Express, Oracle XE, or Sybase Anywhere (with 4 KB packets or larger). If another database fills jumbo frames better with the same tables, data and queries, file a bug report with the Postgres developers.
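
A rough sketch of the first two tests (addresses and paths are placeholders):

# HTTP: create a large file under the web root on one host...
dd if=/dev/zero of=/var/www/bigfile bs=1M count=1024
# ...and pull it from the other host while tcpdump is running
wget -O /dev/null http://10.0.0.1/bigfile

# NFS: mount with 8k read/write sizes (export path is a placeholder)
mount -t nfs -o rsize=8192,wsize=8192 10.0.0.1:/export /mnt
dd if=/mnt/bigfile of=/dev/null bs=1M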
david