5

We have a data load job that moves a relatively large amount of data across the network between two SQL Servers. The servers are on the same subnet, with only a switch between them. The data consists of several large varchar fields plus an XML field.

In order to increase throughput, I have tried changing the network packet size from the default 4096 to 32627 in the connection string; however, it doesn't seem to be helping performance. I suspect the issue is that although we are running gigabit Ethernet, "jumbo frames" are not enabled.
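
For reference, this is roughly what the connection string looks like (SqlClient-style; the database name and security settings here are placeholders, only the Packet Size keyword is the change in question):

Server=pdbsql01dul;Database=StagingDB;Integrated Security=SSPI;Packet Size=32627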

To confirm this, I tried two ping tests:

ping -l 1400 -f pdbsql01dul

Works

ping -l 4096 -f pdbsql01dul

Packet needs to be fragmented but DF set.

As you can see, the largest unfragmented packet appears to be around 1,400 bytes, which lines up with the standard 1500-byte MTU (the ceiling for an unfragmented ping payload there is 1472 bytes, i.e. 1500 minus 28 bytes of IP and ICMP headers).

My question is: if jumbo frames are ~8096 bytes, is there any benefit to setting the network packet size larger than that?

Does this change if the connection is local to the server in question?

Jason Horner

3 Answers

8

What needs to happen is that the MTU setting on the Ethernet network needs to be increased from 1500 to something north of 4096. These settings are typically found on the NIC driver's settings page. For good networking you really want all devices (including all Ethernet switches) on the same Ethernet segment to have the same MTU setting.

Jumbo Frame setting on one of my servers (source: sysadmin1138.net)

That's where you'd change it on one of my servers.
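
If you'd rather check or set the IP side of it from the command line instead of the driver page, something like this should work on a reasonably recent Windows box (the interface name below is a placeholder; use whatever the show command reports for your NIC):

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent

Bear in mind that this only raises the IP-layer MTU; jumbo frames still need to be enabled in the NIC driver and on the switch ports for it to do any good.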

Can it help? It certainly can. Less packet fragmentation means less work on the TCP stack to reassemble the traffic stream. It may not be orders of magnitude, but it could help.

Connections local to the server use, I believe, shared memory or named pipes rather than TCP and are probably unaffected by this change.
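
If you want to confirm what a given session is actually using, this quick query (run from the connection you're curious about) will tell you:

SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;

It returns values such as 'Shared memory', 'Named pipe', or 'TCP' depending on how the session came in.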

sysadmin1138
4

You can try, but I doubt it will help much. TDS as a protocol was never designed for high throughput. If you want to move data between two SQL Server instances, you may consider using Service Broker instead; its network stack is much more oriented toward high throughput than the TDS one. This is why Database Mirroring chose the SSB network stack to communicate with the standby mirror servers. Besides, the data movement semantics of SSB are much better than those of linked servers and usually better than those of custom client apps.
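
To give a rough idea of the shape of it, the receiving side of a Service Broker setup looks something like the following (all the names here are invented, and you would still need to create endpoints and routes so the two instances can reach each other):

CREATE MESSAGE TYPE [//DataLoad/RowBatch] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//DataLoad/Contract] ([//DataLoad/RowBatch] SENT BY INITIATOR);
CREATE QUEUE dbo.DataLoadQueue;
CREATE SERVICE [//DataLoad/TargetService] ON QUEUE dbo.DataLoadQueue ([//DataLoad/Contract]);

The sending side then opens a conversation with BEGIN DIALOG CONVERSATION and pushes batches with SEND ON CONVERSATION, while the target drains its queue with RECEIVE or an activation procedure.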

Remus Rusanu
2

I can't comment on TCP, frames, etc., but I've only set the SQL Server network packet size once, ever, for some vile app that still needed SQL 6.5 client tools.

It's one of those "don't do it" settings.

gbn