Application Optimized Routing on a load balancing router?

I've got two incoming broadband lines, both roughly 10 Mb down / 0.8 Mb up (one line is slightly faster than the other).

I've recently set up a TP-Link TL-R470T+ load-balancing router after researching it and watching THIS VIDEO. I run the two separate ADSL lines into my two separate modems; from there I run two Cat5 cables from the modems to the load balancer, and then one Cat5 from the load balancer to a Wi-Fi router. All devices connect via the Wi-Fi router.

In the video they talk about disabling 'Enable Application Optimized Routing'. If I run a speed test with it enabled, I basically just get the speed result of the faster broadband line. If I disable the setting and run a speed test, I get the combined speed of both lines.

Next to the option is the following description:

Enable Application Optimized Routing With this box checked, all the data packets of the same network application on multi-connections will be forwarded via the same WAN ports, which avoids abnormity caused by forwarding the data packets of this application via different WAN ports.

What does that mean exactly? I use quite a few 'live syncing' sites like Google Drive and Trello, which use a mixture of sockets, Node.js and long polling to stream data back and forth over a continuous connection. Would these services be affected?

I also use a cloud backup service on a few machines; would something like this be affected?

I understand that if I have this setting enabled I can still get the automatic switching benefit of both lines, but not use both of them at the same time. What sort of issues could I run into, and which sort of services will or could be affected, if I leave this option unchecked?

sam

Posted 2014-04-13T12:47:34.703

Reputation: 3 411

Answers

With respect to IPv4:

A TCP connection (not UDP, not multicast, etc.), which applications establish to conduct sessions and present content, exists between exactly one source IP:port and exactly one destination IP:port. The protocol does not permit one-to-many connections for a single session, as far as the public Internet is concerned. Due to the stateful nature of TCP, while it may be possible to have several private hosts conduct parts of a single session, brokered by a load balancer, it is likely not practical.
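A toy sketch (in Python, purely illustrative; the addresses are made up) of why a TCP connection is pinned to one address pair: a receiving stack tracks connections by their 4-tuple, so a segment arriving from a different source IP simply matches no established connection.

```python
# A TCP endpoint identifies each connection by the 4-tuple
# (src_ip, src_port, dst_ip, dst_port). This toy connection table
# shows why a segment NATed out a second WAN, and thus carrying a
# different source IP, does not match any established connection.

established = {
    ("203.0.113.10", 50432, "198.51.100.5", 443),  # opened via WAN 1
}

def accept_segment(src_ip, src_port, dst_ip, dst_port):
    """Return True only if the segment belongs to a known connection."""
    return (src_ip, src_port, dst_ip, dst_port) in established

# Segment arriving over the original path: matches.
print(accept_segment("203.0.113.10", 50432, "198.51.100.5", 443))  # True

# Same flow, but sent out the second WAN with a different source IP:
# no matching connection, so a real stack would drop or reset it.
print(accept_segment("203.0.113.99", 50432, "198.51.100.5", 443))  # False
```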

The route between these two IP:port hosts may be infinitely dynamic, insofar as neither host runs out of resources or exceeds any timers. This includes gracefully handling out-of-sequence packets, as long as no limits, hard or soft, are exceeded.

This means that in order to load-balance a session over two separate links in the outbound direction, both paths must be able to forward traffic from the same source IP to the same destination IP.

When the two links belong to the same ISP, this is usually not a problem, unless there are strict source-IP filters (explicit or implicit) on each connection. In fact, absent any specific restrictions, one can balance in the outbound direction over two separate links without any assistance from the ISP.

Not so for load-balancing the inbound traffic, however. The ISP almost always has to step in to enable load-balancing in the inbound direction.

Let's assume the ISP is on board with implementing load balancing for you:

One of the easiest ways to accomplish this is to assign you your own subnet, apart from the usual networks served by the DSLAM. This subnet could be as small as a single /32 host, or, for an office, perhaps even several hundred hosts.

For reliable load balancing between two IP links and customer premises equipment (CPE), the load balancer ought to have at least 3 separate interfaces, and the two ISP-facing interfaces ought to belong to two different networks, to eliminate any ambiguous routing or switching decisions.

Say one of your ISP-facing load-balancer interfaces is 10.2.2.2/30, the other 10.2.2.254/30. Your CPE network is 65.172.1.0/24 and the load balancer's CPE-facing interface is 65.172.1.1.

Your load balancer would have to do some form of the following:

ip route 0.0.0.0 0.0.0.0 10.2.2.1
ip route 0.0.0.0 0.0.0.0 10.2.2.253

This creates two static default routes of equal priority, one via each connection to the ISP.

On a Cisco router acting as a load balancer, the default method is to load-balance per destination; given the way route-cache flow works, it's less work for the router. However, there is the option

ip load-sharing per-packet

which forwards traffic that has more than one equivalent route in round-robin fashion out both interfaces.

ip load-sharing per-destination

sets it back to its default scheme.
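The difference between the two schemes can be sketched in a few lines of Python (illustrative only; the interface names are invented, and a real router does this in its forwarding cache, not in software like this):

```python
import itertools
import zlib

WAN_PORTS = ["wan1", "wan2"]  # two equal-cost default routes

# Per-packet: simple round-robin over the equal-cost interfaces,
# so consecutive packets of one flow alternate between links.
_rr = itertools.cycle(WAN_PORTS)

def per_packet(packet):
    return next(_rr)

# Per-destination: hash the destination IP, so every packet to the
# same host always leaves via the same interface.
def per_destination(packet):
    h = zlib.crc32(packet["dst_ip"].encode())
    return WAN_PORTS[h % len(WAN_PORTS)]

pkts = [{"dst_ip": "198.51.100.5"}] * 4
print([per_packet(p) for p in pkts])       # alternates: wan1, wan2, wan1, wan2
print([per_destination(p) for p in pkts])  # the same port every time
```

Per-packet spreads load most evenly but can reorder packets of a single flow; per-destination keeps each flow on one link at the cost of less even utilization.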

This setup would load-balance your outbound connections.

Your ISP would have to configure these two static routes on their device, with the same per-packet or per-destination option, most likely the former:

ip route 65.172.1.0 255.255.255.0 10.2.2.2
ip route 65.172.1.0 255.255.255.0 10.2.2.254

If set up properly on both sides, both of your load balancer's WAN interfaces ought to report the same packets-per-second received and the same packets-per-second transmitted statistics.

The features you inquire about are very similar to per-packet and per-destination load sharing. However, if it's the same ISP, you can safely leave it on per-packet; the 'optimized' option is more for those load-balancing two connections to different providers. Note that changing this option only affects your outbound traffic, and has no effect on inbound.

It's quite unlikely that you'll be able to implement a two-way load-balanced connection without help (and likely a fee) from your ISP. Your ISP ought to be able to advise you on settings that suit your situation.

However, it is my opinion, given what I know about your network design, that you are unlikely to see any noticeable trouble with per-packet.

Nevin Williams

Posted 2014-04-13T12:47:34.703

Reputation: 3 725

So you have two broadband lines, which means you have two WAN ports. When you enable load balancing on two or more WAN ports, IP packets generated by your applications will pass through the first available WAN port, or be distributed using a round-robin algorithm, or perhaps a more sophisticated algorithm supported by the hardware and firmware of your load balancer.

That means remote servers can sometimes receive IP packets from your applications with different source addresses. This can be confusing, or even treated as a man-in-the-middle attack. That's why most encrypted connections, such as HTTPS, TLS/SSL-encrypted VPNs, and perhaps some online games, will not work or may misbehave with this connection type.

Application Optimized Routing means that all IP packets from one application will pass through only one WAN port (that's why you saw the "speed result of the faster broadband line" with speedtest). If you run more than one application, for example two browsers with two speed tests, Application Optimized Routing should use one WAN for your first application, another WAN for the second, and so on. In conclusion, if you enable Application Optimized Routing you will be able to use both connections, but the connection speed for any one application will be equal to the speed of a single broadband line (c. 10 Mb down / c. 0.8 up).
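The behaviour described above can be sketched like this (a toy model in Python; the addresses, the round-robin assignment of new flows, and the flow key are assumptions, since the actual firmware logic isn't published):

```python
import itertools

# Toy model of "Application Optimized Routing": the first packet of a
# new flow picks the next WAN in round-robin order, and every later
# packet of that flow stays pinned to the same WAN.

WAN_PORTS = ["wan1", "wan2"]
_next_wan = itertools.cycle(WAN_PORTS)
_flow_table = {}  # (src_ip, dst_ip, dst_port) -> WAN port

def route(src_ip, dst_ip, dst_port):
    key = (src_ip, dst_ip, dst_port)
    if key not in _flow_table:            # first packet of this flow
        _flow_table[key] = next(_next_wan)
    return _flow_table[key]

# Two concurrent speed tests from the same PC to two different servers
# land on different WANs, but each one is limited to a single line:
a = route("192.168.0.10", "198.51.100.5", 443)   # first app
b = route("192.168.0.10", "203.0.113.80", 443)   # second app
assert a != b
assert route("192.168.0.10", "198.51.100.5", 443) == a  # stays pinned
```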

AlexAndersan

Posted 2014-04-13T12:47:34.703

Reputation: 491

The TCP protocol expects packets to arrive in sequential order. If you use two or more WAN interfaces, then it is possible that packets will arrive on the far end out of order. This can cause the overall throughput to drop due to the recovery action that will then be unnecessarily taken by the receiver.

In your case it would mean, for example, that if you "bond" two 5 Mbps WAN interfaces over TCP, the total throughput may be somewhat less than the theoretical maximum of 10 Mbps. The solution to this problem is to bind a given connection to only one WAN for the duration of the connection.

The TL-R480T User Guide explains this option (even though this is not your model):

With the box before Enable Application Optimized Routing checked, the Router will consider the source IP address and destination IP address of the packets as a whole and record the WAN port they pass through. And then the packets with the same source IP address and destination IP address or destination port will be forwarded to the recorded WAN port. This feature is to ensure the multi-connected applications to work properly.

The term IP address merits some analysis, since your computer is not on the Internet. It is rather the router that is connected to the Internet, while your computer is a member of your local network, which is entirely separate from the Internet. Therefore, the IP address of your computer will probably look similar to 192.168.0.xxx, while the IP address of the website you are connecting to is entirely different.

In the above text therefore "source IP address" means the address of your computer inside the internal network, while "destination IP address" will be on the Internet. The result would be that even if you opened multiple connections from your computer, all to the same website, the router will consider all of them as being between the same IP addresses.
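A short sketch of that point (Python, illustrative only; real firmware records actual state rather than hashing, and the addresses are invented): because the decision key is the (source IP, destination IP) pair, source ports never enter into it, so parallel connections from one PC to one website all share a WAN.

```python
import zlib

WAN_PORTS = ["wan1", "wan2"]

# Per the quoted manual text, the router keys its decision on the
# (source IP, destination IP) pair only. Source ports are ignored,
# so every connection between one PC and one website shares a WAN.
def pick_wan(src_ip, dst_ip):
    h = zlib.crc32(f"{src_ip}>{dst_ip}".encode())
    return WAN_PORTS[h % len(WAN_PORTS)]

# Three parallel downloads from the same PC to the same site
# (each from a different source port, which plays no role here)
# all map to a single WAN:
wans = {pick_wan("192.168.0.10", "198.51.100.5") for _ in range(3)}
assert len(wans) == 1
```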

Thus, the explanation for your tests is:

  • With Enable Application Optimized Routing on, the router used only one WAN, so the measured speed was that of this one WAN. The router could also be intelligent enough to prefer the faster WAN over the slower one.
  • With Enable Application Optimized Routing off, the router used both WANs, so the total speed was the sum of the bandwidths of both.

For the second case, since the observed total bandwidth was about equal to the sum of the bandwidths, this means that the protocol used by speedtest is not very sensitive to out-of-order packets.

harrymc

Posted 2014-04-13T12:47:34.703

Reputation: 306 093

"The TCP protocol expects packets to arrive in sequential order." I don't believe that is correct. TCP was designed to contain sequence numbers so that out-of-order packets can be reassembled into the correct order. I think the real issue with multi-WAN setups is more that a TCP connection is a contract between only two IP addresses. If packets arrive from a third address that hasn't yet opened a connection, these packets will most likely be dropped. I'm actually baffled by how these TP-Link devices can download a single file over SpeedTest.net & have it split the stream over both WAN links. – Simon East – 2016-09-08T01:46:38.073

@SimonEast: How can TCP packets arrive out of order when the sender will wait for ack on each one? – harrymc – 2016-09-08T04:45:25.267

This can happen even over a single connection to the internet because of TCP Windowing (read more here) where multiple packets are sent before waiting for acknowledgement. Since those packets could actually take different routes to their destination, the packets can arrive out of order. The networking stack is supposed to handle this reordering gracefully before passing the data to the application.

– Simon East – 2016-09-08T05:19:06.340

I think there is misunderstanding between packet and frame. A packet may be larger than an Ethernet frame. – harrymc – 2016-09-08T06:36:36.710

No, my point was that TCP does not expect packets to always arrive in sequential order. It has specific mechanisms to handle out of order packets. As it says on Wikipedia, *"due to congestion, load balancing, or other network issues, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data ... if the data still remains undelivered, its source is notified of this failure."* Your answer didn't acknowledge this.

– Simon East – 2016-09-09T00:02:29.910

@SimonEast: Bare TCP cannot acknowledge out-of-order packets. Read the original decision from 1988. Reassembling packets is possible within the receive window via Selective acknowledgments, operating only if both parties support it, negotiated when the connection is established. Protocol explained here.

– harrymc – 2016-09-09T08:00:09.083