
I have set up an IPsec over GRE connection with a remote host; both machines are NetBSD 6.1 based. The "client" is connected to the Internet through a 400Mbps fiber connection. The "server" is located on a 10Gbps network. Both machines have 1Gbps NICs which behave perfectly, meaning they both reach the link speed limit when transferring data outside the IPsec tunnel. When doing a transfer through the tunnel, speed drops by a factor of 5 to 10:

direct connection: /dev/null 27%[====>     ] 503.19M 45.3MB/s eta 83s
IPsec connection:  /dev/null  2%[          ]  47.76M 6.05MB/s eta 5m 3s
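(These figures are wget(1)-style progress output; the transfer was of the form below, with a hypothetical host and file:)

$ wget -O /dev/null http://server.example.org/bigfile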

The tunnel is set up this way:

On the server, which is a NetBSD domU running on a debian/amd64 dom0:

$ cat /etc/ifconfig.xennet0
# server interface
up
inet 192.168.1.2 netmask 255.255.255.0
inet 172.16.1.1 netmask 0xfffffffc alias
$ cat /etc/ifconfig.gre0 
create
tunnel 172.16.1.1 172.16.1.2 up
inet 172.16.1.5 172.16.1.6 netmask 255.255.255.252
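
The SPD entries that direct the GRE traffic into IPsec are not quoted here; roughly, assuming transport-mode ESP between the tunnel endpoints and loading with setkey(8), they would look like this on the server (the client mirrors them with the addresses swapped):

# protect everything exchanged between the GRE endpoints with transport-mode ESP
spdadd 172.16.1.1 172.16.1.2 any -P out ipsec esp/transport//require;
spdadd 172.16.1.2 172.16.1.1 any -P in ipsec esp/transport//require;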

IPsec traffic is forwarded from dom0's public IP to the domU's xennet0 interface through iptables NAT rules:

-A PREROUTING -i eth0 -p udp -m udp --dport 500 -j DNAT --to-destination 192.168.1.2:500 
-A PREROUTING -i eth0 -p esp -j DNAT --to-destination 192.168.1.2 
-A PREROUTING -i eth0 -p ah -j DNAT --to-destination 192.168.1.2
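
Assuming the dom0's FORWARD chain is not wide open, matching accept rules are needed as well; a sketch, not the actual ruleset:

-A FORWARD -d 192.168.1.2 -p udp -m udp --dport 500 -j ACCEPT
-A FORWARD -d 192.168.1.2 -p esp -j ACCEPT
-A FORWARD -d 192.168.1.2 -p ah -j ACCEPT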

On the client:

$ cat /etc/ifconfig.vlan8 
# client public interface
create
vlan 8 vlanif re0
!dhcpcd -i $int
inet 172.16.1.2 netmask 0xfffffffc alias
$ cat /etc/ifconfig.gre1 
create
tunnel 172.16.1.2 172.16.1.1 up
inet 172.16.1.6 172.16.1.5 netmask 255.255.255.252

On racoon's side, I tried various hash / encryption algorithm combinations, even enc_null, but nothing really changes: the transfer is still stuck at 6MB/s max. On the server:

remote node.public.ip {
     exchange_mode main;
     lifetime time 28800 seconds;
     proposal {
         encryption_algorithm blowfish;
         hash_algorithm sha1;
         authentication_method pre_shared_key;
         dh_group 2;
     }
     generate_policy off;
}

sainfo address 172.16.1.1/30 any address 172.16.1.2/30 any {
     pfs_group 2;
     encryption_algorithm blowfish;
     authentication_algorithm hmac_sha1;
     compression_algorithm deflate;
     lifetime time 3600 seconds;
}

On the client:

remote office.public.ip {
     exchange_mode main;
     lifetime time 28800 seconds;
     proposal {
         encryption_algorithm blowfish;
         hash_algorithm sha1;
         authentication_method pre_shared_key;
         dh_group 2;
     }
     generate_policy off;
}

sainfo address 172.16.1.2/30 any address 172.16.1.1/30 any {
     pfs_group 2;
     encryption_algorithm blowfish;
     authentication_algorithm hmac_sha1;
     compression_algorithm deflate;
     lifetime time 3600 seconds;
}
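
For reference, setkey(8) from the base system can dump the SAD on either NetBSD host to confirm which algorithms the negotiated SAs actually ended up with:

$ setkey -D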

The tunnel establishes with no issue; the only problem is the throughput drop. Again, when transferring between the server and the client without the tunnel, speed is optimal; the drop occurs only through IPsec.

Both machines have Intel CPUs running at 2+GHz and plenty of memory, with very little CPU time consumed by anything other than forwarding / NAT.

Has anyone witnessed such behaviour? Any idea where to look further?

Thanks,

iMil

1 Answer


If you didn't tweak the MTU, you may have post-fragmentation issues (fragmentation occurring after encryption), which are well explained in this Cisco documentation: http://www.cisco.com/c/en/us/td/docs/interfaces_modules/services_modules/vspa/configuration/guide/ivmsw_book/ivmvpnb.html#wp2047965

You should try reducing the MTU inside the tunnel by 82 bytes (GRE + IPsec headers).
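For example, assuming a 1500-byte path MTU, the gre interface MTU can be lowered on the NetBSD side, and TCP MSS can be clamped for forwarded traffic on the Linux dom0 (a sketch, not tested on this setup):

# NetBSD: 1500 - 82 bytes of GRE + IPsec overhead
ifconfig gre0 mtu 1418
# Linux dom0: clamp TCP MSS to the discovered path MTU
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu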

Fanu
  • Yes I did that, even forced the MSS with `iptables` on the server side, but no matter what, I get a massive speed drop. FWIW, I lowered the involved MTUs to values as low as 1360 without success. – iMil Jul 14 '15 at 16:26