
We have dozens of embedded devices installed at customers, all calling home to our OpenVPN service. That works fine in general, but a few of our customers have severe path MTU issues. Our leverage to get customers to fix their networks is limited, so we need OpenVPN to deal with it. In a nutshell, my question is:

How can I mitigate the low path MTUs of some clients on a per-client basis, i.e. without using global settings that accommodate the worst case for all clients?

Note that our worst case is pretty bad: path MTU 576, drops all fragments, doesn't fragment itself, doesn't honour the DF bit. You see why I'd prefer not to solve this issue globally.

The OpenVPN manpage offers a number of MTU related options, most notably --link-mtu, --tun-mtu, --fragment and --mssfix. But it also says

--link-mtu [...] It's best not to set this parameter unless you know what you're doing.

--tun-mtu [...] It's best to use the --fragment and/or --mssfix options to deal with MTU sizing issues.

So I started experimenting with --fragment and --mssfix, but soon had to realize that at least the former must be set not only client-side but also server-side. I then looked into server-side per-client configuration via --client-config-dir, but the manpage says

The following options are legal in a client-specific context: --push, --push-reset, --iroute, --ifconfig-push, and --config.

No mention of MTU options!

So here are my more specific questions:

  • Why exactly are link-mtu and tun-mtu discouraged? What are the potential problems with these options? Note that I am quite comfortable with low-level IP header munging.
  • Which of the options link-mtu, tun-mtu, fragment and mssfix have to be mirrored on the server side in order to work?
  • Which of the options link-mtu, tun-mtu, fragment and mssfix can be used in client-config-dir?
  • In case all four options have to be mirrored server-side and cannot be used inside client-config-dir: are there any alternatives to combat low path MTU per client?

Notes:

  • Parts of my questions were already asked 5 years ago here, but they weren't really answered back then, hence I dare to duplicate them.
  • The OpenVPN server is currently 2.2.1 on Ubuntu 12.04. We are preparing an upgrade to 2.3.2 on Ubuntu 14.04.
  • The OpenVPN clients run 2.2.1 on Debian 7.6.
  • I am happy to determine a customer's path MTU manually myself.
  • Currently we cannot test much server-side, but we are building a completely separate test bed, which should be ready soon.

I am thankful for any helpful advice.

Nils Toedtmann
  • 576? Dear gawd. I haven't seen an MTU that low since the days of dialup. Is that going over an ancient serial link? – Michael Hampton Sep 13 '14 at 14:44
  • Could you run two OpenVPN servers? Maybe you could run both servers on the same public IP address and use port forwarding (or a routing policy) to direct clients to a different OpenVPN server depending on whether they are on a known problematic network or not (as determined by a list of client IP addresses). – kasperd Sep 13 '14 at 14:52
  • @MichaelHampton I wondered too. It's >600 kbit/s and RTT ~30 ms, doesn't look like ancient serial to me. Given that they have other stupid settings (e.g. not responding to DF with 'fragmentation needed'), I guess this is just another one. We told them, but haven't heard back yet. – Nils Toedtmann Sep 13 '14 at 14:54
  • @kasperd Interesting idea. I could run multiple OpenVPN server instances; I would need maybe 3 or 4, for different MTU ranges. Server-side per-client NAT would not work (I cannot predict the dynamic public client IP addresses), but I would have to alter the client config anyway for the MTU settings (correct?), so I would simply configure the different port straight into the client. But it would be a maintenance nightmare that I would prefer to avoid! – Nils Toedtmann Sep 13 '14 at 15:02
  • @NilsToedtmann Which criteria would you use to detect which clients are affected? One other approach could be to run a script on the server after a client has connected. The script can try to ping the client IP address with varying packet sizes to figure out which work and which do not. Then it can insert `iptables` rules to reduce the MSS on all SYN packets to or from that client IP address. – kasperd Sep 13 '14 at 15:09
  • @kasperd OpenVPN has its own `--mtu-test`. Manually I use `hping3 $TEST_TARGET --icmp --data $((TEST_MTU-28)) --file /dev/urandom` to test which packet sizes pass. I found that a normal ping with non-random payload is a bad test on uplinks with packet compression. – Nils Toedtmann Sep 13 '14 at 15:40
  • @kasperd I am not worried about finding out the path MTU of a client; let's assume I know it for each client. I worry about how to deal with a low MTU. Interesting idea to script a server-side per-client `iptables --set-mss`. It covers only TCP, but that happens to cover 90% of my use case. – Nils Toedtmann Sep 13 '14 at 15:44
  • Another idea: affected clients use OpenVPN via TCP, and have an `iptables --set-mss` rule on their outgoing interface. Of course I'd rather use UDP, but this would solve my problem with minimal reconfiguration, correct? – Nils Toedtmann Sep 13 '14 at 15:48
  • @NilsToedtmann Running VPN over TCP comes with its own set of problems. I wouldn't recommend it. It is true that `--set-mss` only covers TCP. But many users never need large packets over other protocols. In principle you should be able to bounce large packets entering the tunnel from your end with an ICMP error. But I couldn't find any `iptables` target for that purpose. If you could find an `iptables` module doing this, then the only issue left would be large non-TCP packets from the client side. – kasperd Sep 13 '14 at 16:13
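A sketch of the probe-and-clamp approach discussed in the comments above. All addresses, the interface name and the search bounds are placeholders, and it assumes hping3 exits non-zero when no replies come back; treat it as an illustration, not tested configuration:

#!/bin/sh
# Probe the largest packet that reaches the client's *public* address,
# then clamp the MSS of tunnelled TCP towards its *VPN* address.
CLIENT_PUBLIC_IP="203.0.113.42"   # placeholder
CLIENT_VPN_IP="10.8.0.42"         # placeholder
TUN_DEV="tun0"                    # placeholder

# Binary-search the path MTU; random payloads defeat link-layer
# compression, as noted in the comments. Assumes 576 always passes.
lo=576; hi=1500
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if hping3 "$CLIENT_PUBLIC_IP" --icmp --count 3 \
       --data $((mid - 28)) --file /dev/urandom >/dev/null 2>&1; then
    lo=$mid
  else
    hi=$mid
  fi
done

# Clamp tunnelled TCP towards this client: MSS = path MTU - 40 (IP+TCP).
iptables -t mangle -A FORWARD -o "$TUN_DEV" -d "$CLIENT_VPN_IP" \
  -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss $((lo - 40))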

3 Answers


I solved the problem on the client side by adding the option mssfix 1300 to the config file.

From the OpenVPN man page:

--mssfix max
    Announce to TCP sessions running over the tunnel that they should limit their send packet sizes such that after OpenVPN has encapsulated them, the resulting UDP packet size that OpenVPN sends to its peer will not exceed max bytes. 

The original idea for my solution came from personalvpn.org.
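For reference, the change is a single line in the client's configuration (1300 is the value used here; tune it to your path):

# excerpt from the client's config file
# clamp TCP MSS so that encapsulated UDP packets stay <= 1300 bytes
mssfix 1300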

oz123
  • So `mssfix` can be set client-side only? Well, that's something at least. It doesn't help with UDP packets though (which is why I was interested in the other options, but at least the recommended `fragment` needs to be set server-side too). – Nils Toedtmann Dec 22 '14 at 12:44
  • `mssfix` can be added on the server as well as the client; however, the smaller value will be used in negotiation. – Ahmed Jul 25 '16 at 22:45

Not sure if this can help with embedded devices, but I'm sharing my experience in case it can help others.

TL;DR: skip to the last question/answer for a possible solution


  • Why exactly are link-mtu and tun-mtu discouraged? What are the potential problems with these options? Note that I am quite comfortable with low-level IP header munging.

In my experience as an OpenVPN user, for most common cases it is actually the fragment and/or mssfix values that are best not fiddled with, i.e. contrary to what the documentation states.

link-mtu and tun-mtu values are computed relative to each other, depending on which cipher (if any) is used for the tunnel. By default tun-mtu is the classic 1500 bytes and link-mtu is derived from it, but you may set either one and have the other recomputed accordingly. In this regard, one quite-certain rule of thumb is to never set both: set only one, because the other will follow suit, as in the sketch below.
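For example (values illustrative; 1541 corresponds to a tun-mtu of 1500 with the defaults, matching the server-reported values used later in this answer):

# set the payload MTU and let link-mtu be derived from it...
tun-mtu 1500

# ...or set the on-the-wire size and let tun-mtu be derived; never set both
link-mtu 1541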

The only case where I did run into problems while altering them is when I set the endpoints to have very different values, with the server being the one having the very low value, like 576 on the server while 1500 on the client. These set-ups simply do not handshake at all, and I have never investigated further because, frankly, I've only encountered such set-ups in artificial lab tests. The opposite case, i.e. a very low value on the client while a normal/high value on the server, works just fine.

In fact, I have found that setting either link-mtu or tun-mtu (I mostly prefer setting link-mtu) actually solves most common cases neatly, i.e. the tunnel works just as it should, including any non-TCP traffic.

Of course, any explicitly set fragment and/or mssfix values must take the explicit link-mtu/tun-mtu values into account, but I found that just leaving them alone (i.e. not even mentioning them in the configuration) is usually the best thing to do, because they simply adjust automatically according to the link-mtu/tun-mtu values.

That said, it is true that setting only mssfix (thus leaving link-mtu/tun-mtu as well as fragment unspecified) does solve everything related to TCP, but all other protocols are bound to experience problems. The only reason they usually don't is that most non-TCP traffic is either DNS over UDP, which is typically small, or ICMP echo requests/replies, which are again small packets by default.
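One quick way to see that non-TCP failure mode from a client (the server's VPN address 10.8.0.1 is a placeholder): with only mssfix set, small pings pass while large ones vanish on a low-MTU path, even though TCP keeps working:

ping -c 3 -s 64   10.8.0.1   # small ICMP payload: works
ping -c 3 -s 1400 10.8.0.1   # large ICMP payload: silently dropped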

Note however that I'm referring to OpenVPN on Linux; on other OSes, particularly the Windows family and iOS devices, it might be a different story, i.e. altering link-mtu/tun-mtu might be irrelevant or even disruptive, and you might have to resort to altering fragment/mssfix as the documentation suggests. Particularly hostile cases are when the server or the clients (or both) do not support the IP fragmentation/reassembly that may be triggered by artificially lowering link-mtu/tun-mtu; in such cases you can only resort to enabling fragment in OpenVPN.


  • Which of the options link-mtu, tun-mtu, fragment and mssfix have to be mirrored on the server side in order to work?

According to the warning messages OpenVPN writes to its log when you set link-mtu or tun-mtu without mirroring it on the other endpoint, the link-mtu and tun-mtu values should be exactly mirrored, either explicitly or implicitly. However, in real use cases I have never experienced problems altering either value, even when I ended up with very different values between the tunnel endpoints.

fragment must either be present on both sides or absent on both sides: its presence enables fragmentation performed by OpenVPN itself using its own internal algorithm (not IP fragmentation), and as such it must be agreed upon by both endpoints. When present, however, it may have asymmetric values.

mssfix does not need to be mirrored, and it may also have asymmetric values between the endpoints.
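To illustrate those mirroring rules (values illustrative): fragment must appear on both sides, though the values may differ, while mssfix may appear on one side only:

# server.conf
fragment 1400
mssfix 1400

# client.conf -- fragment must also be present here, but may be smaller
fragment 548
# no mssfix needed on this side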


  • Which of the options link-mtu, tun-mtu, fragment and mssfix can be used in client-config-dir?

None


A possible solution

  • In case all four options have to be mirrored server-side and cannot be used inside client-config-dir: are there any alternatives to combat low path MTU per client?

One way is to use Linux's per-route settings, which allow setting MTU values arbitrarily, up to the MTU of the actual interface. It goes as follows:

For starters, perform Path MTU Discovery manually in order to find out the correct MTU value between the server and each specific remote client, e.g. with ping -M do from the server towards the remote client's *public* address, as sketched below.
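A worked probe for the 576-byte example (the client address is a placeholder; ping's -s is the payload size, i.e. the candidate MTU minus 28 bytes of IP and ICMP headers):

ping -M do -s 548 -c 3 203.0.113.42   # 548 + 28 = 576: should pass
ping -M do -s 549 -c 3 203.0.113.42   # 549 + 28 = 577: should fail (silently, on paths that ignore DF)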

Then, on the client side, assuming it is just a client and not a router for other hosts in its LAN, just set OpenVPN's link-mtu value to the actual MTU minus 28. For instance, on a remote client whose actual path MTU is 576, link-mtu 548 will do the trick.

Note that this works for all traffic (not just TCP) originating from the client, because the link-mtu value is used by OpenVPN as an upper bound on the size of its own (UDP port 1194) payloads sent to the remote endpoint; hence the remote client's actual MTU (as discovered manually) minus 20 (outer IP header without options) minus 8 (outer UDP header).

The MSS value that OpenVPN will automatically clamp on in-tunnel TCP traffic will then be link-mtu minus OpenVPN's own overhead (which varies depending on the cipher used), minus 20 (inner IP header without options) minus 20 (inner TCP header without options).
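A worked instance of that arithmetic, using the 548-byte link-mtu from above and the 1541/1500 overhead example from the next paragraph:

overhead = link-mtu - tun-mtu = 1541 - 1500 = 41 bytes
MSS      = 548 - 41 - 20 - 20 = 467 bytes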

on the server side, install one route for each "low-MTU" client, in order to set the correct MTU for each of them. The correct per-route value is that remote client's link-mtu value (as determined previously) minus OpenVPN's own overhead; the latter can be derived as the link-mtu value minus the tun-mtu value reported by the OpenVPN server for that specific tunnel. For instance, assuming the OpenVPN server reports a link-mtu of 1541 and a tun-mtu of 1500 (an overhead of 41 bytes), on the machine hosting your OpenVPN server you would do something like:

ip route add <client's-*vpn*-address> via <your-openvpn-server's-*vpn*-address> mtu 507

Such an operation can conveniently be done on demand by a client-connect script, which receives those values (among many others) in environment variables set dynamically by OpenVPN. In shell-script parlance it would be:

ip route replace "$ifconfig_pool_remote_ip" via "$ifconfig_local" mtu "$(( 576 - 28 - (link_mtu - tun_mtu) ))"

From then on, outbound traffic towards that specific client's VPN address will respect that MTU instead of the tun interface's.
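A slightly fuller version of that hook, as a sketch: the script path and the hard-coded worst-case path MTU of 576 are assumptions (replace the latter with a per-client lookup if you track measured values), and OpenVPN needs script-security 2 to run external scripts:

# server.conf
script-security 2
client-connect /etc/openvpn/set-client-route-mtu.sh

#!/bin/sh
# /etc/openvpn/set-client-route-mtu.sh (hypothetical path)
# OpenVPN exports link_mtu, tun_mtu, ifconfig_local and
# ifconfig_pool_remote_ip in the environment when calling this hook.
CLIENT_PATH_MTU=576                  # assumed worst case; look up per client
OVERHEAD=$(( link_mtu - tun_mtu ))   # OpenVPN's own encapsulation overhead
ip route replace "$ifconfig_pool_remote_ip" \
    via "$ifconfig_local" \
    mtu $(( CLIENT_PATH_MTU - 28 - OVERHEAD ))
exit 0                               # a non-zero exit would reject the client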

Furthermore, if your OpenVPN server also acts as a router (sysctl net.ipv4.ip_forward=1) between its LAN and the remote clients, then the per-route settings applied to the OpenVPN server machine (its Linux kernel) will also trigger proper ICMP messages (type 3 code 4, Fragmentation Needed) towards the machines in the LAN when they send DF traffic towards the remote clients; those LAN machines must of course comply with the ICMP messages. The kernel will also perform IP fragmentation inside the tunnel on behalf of the LAN machines when they send non-DF traffic; in that case your remote clients must support IP reassembly for the fragments coming out of the tunnel.

Note also that on Ubuntu 14.04 HWE and newer (or equivalent kernels up to v5.7.x) you will also have to set sysctl net.ipv4.ip_forward_use_pmtu=1 in order to have Linux perform said ICMP signalling/fragmentation when forwarding other machines' traffic; that additional sysctl is not required for outbound traffic originated by the server machine itself.
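In sysctl terms (one-off commands; persist them via /etc/sysctl.d as appropriate):

sysctl -w net.ipv4.ip_forward=1           # act as a router
sysctl -w net.ipv4.ip_forward_use_pmtu=1  # honour per-route MTUs when forwarding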

Finally, note that for a fully correct configuration you should also set link-mtu 1472 (assuming an underlying interface MTU of 1500) on the server side. I actually do this everywhere as a base configuration, except for peculiar cases requiring specific workarounds. This is because OpenVPN does not take the underlying interface's MTU as a starting value for its link-mtu/tun-mtu, nor does it perform PMTU discovery even when its mtu-disc option is set to yes on OSes that support it. Therefore, explicitly setting link-mtu to a value reflecting the underlying interface's MTU (using the formulae described above) is, to me, the sane default, at least on Linux.
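In config terms, that base line is simply (assuming a 1500-byte Ethernet underneath, so 1500 - 20 - 8 = 1472):

# server.conf -- reflect the underlying interface's MTU on the wire
link-mtu 1472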

HTH

LL3
  • Thank you for such a thoughtful response. It was very informative; I followed your recommendation and fixed my MTU issues. – P Shved Apr 27 '21 at 15:42

Given the lack of answers, I am now posting a solution that is not very elegant, but simple: run another OpenVPN instance on TCP for the "bad clients"

proto tcp

and lower the TCP MSS on the client, e.g.

# MSS = path MTU - 40 (20 bytes IP + 20 bytes TCP); OUT_DEV and PATH_MTU are placeholders
iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o "${OUT_DEV}" -j TCPMSS --set-mss "$((PATH_MTU - 40))"

An advantage of this solution is that each client can set its individual MSS.

This is admittedly TCP-over-TCP, but that should perform well enough in many scenarios.
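For illustration, the second instance differs from the main one only in protocol and port (the port number and file name are assumptions; pick any free port):

# /etc/openvpn/server-tcp.conf -- the "bad clients" instance
proto tcp-server
port 1195
# ...everything else identical to the main UDP instance...

with the affected clients pointing at it via proto tcp-client and a matching remote directive in their configs.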

Note that I am still very interested in solutions that don't require proto tcp, and I'll mark one as the accepted answer if it more or less fulfils my outlined requirements.

Nils Toedtmann