With respect to IPv4:
A TCP connection (not UDP, not multicast, etc.), over which applications establish sessions to conduct transactions and present content, is between one and only one source IP:port and one and only one destination IP:port. The protocol does not permit one-to-many connections for a single session, as far as the public Internet is concerned. Because TCP is stateful, while it may be possible to have several private hosts handle parts of a single session, brokered by a load balancer, it is unlikely to be practical.
The route between these two IP:port endpoints may be infinitely dynamic, so long as neither host runs out of resources or exceeds any timers. This includes gracefully handling out-of-sequence packets, as long as no limits, hard or soft, are exceeded.
This means that in order to load-balance a session over two separate links in the outbound direction, both paths must be able to forward traffic from the same source IP to the same destination IP.
When the two links belong to the same ISP, this is usually not a problem, unless there are strict source-IP filters (explicit or implicit) on each connection. In fact, absent any specific restrictions, one can balance in the outbound direction over two separate links without any assistance from the ISP.
Not so for load-balancing the inbound traffic, however. The ISP almost always has to step in to enable load-balancing in the inbound direction.
Let's assume the ISP is on board with implementing load balancing for you:
One of the easiest ways to accomplish this is to assign you your own subnet, apart from the usual networks served by the DSLAM. This subnet could be as small as a single /32 host or, for an office, large enough for several hundred hosts.
For reliable load balancing between two IP links and the customer-premises equipment (CPE), the load balancer ought to have at least 3 separate interfaces, and the two ISP-facing interfaces ought to belong to two different networks, to eliminate any ambiguous routing or switching decisions.
Say one of your ISP-facing load-balancer interfaces is 10.2.2.2/30, the other 10.2.2.254/30. Your CPE network is 65.172.1.0/24 and the load balancer's CPE-facing interface is 65.172.1.1.
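With those addresses, the interface configuration on the load balancer might look something like the following sketch (the FastEthernet interface names are assumptions; substitute whatever your hardware presents):
interface FastEthernet0/0
 description ISP link A
 ip address 10.2.2.2 255.255.255.252
interface FastEthernet0/1
 description ISP link B
 ip address 10.2.2.254 255.255.255.252
interface FastEthernet0/2
 description CPE-facing LAN
 ip address 65.172.1.1 255.255.255.0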
Your load-balancer would have to do some form of the following:
ip route 0.0.0.0 0.0.0.0 10.2.2.1
ip route 0.0.0.0 0.0.0.0 10.2.2.253
This creates two static default routes of equal priority, one toward each connection to the ISP.
On a Cisco router acting as a load balancer, the default method is to load-balance per destination; given how the route cache works, that is less work for the router. However, there is the option
ip load-sharing per-packet
which forwards traffic that has more than one equal-cost route out both interfaces in round-robin fashion.
ip load-sharing per-destination
sets it back to its default scheme.
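Note that on IOS images that use CEF switching, ip load-sharing is applied per interface rather than globally, roughly as follows (interface names are again assumptions):
ip cef
interface FastEthernet0/0
 ip load-sharing per-packet
interface FastEthernet0/1
 ip load-sharing per-packet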
This setup would load-balance your outbound connections.
Your ISP would have to configure these two static routes on their device, with the same per-packet or per-destination option, most likely the former:
ip route 65.172.1.0 255.255.255.0 10.2.2.2
ip route 65.172.1.0 255.255.255.0 10.2.2.254
If set up properly on both sides, both your load balancer's WAN interfaces ought to report the same packets-per-second received and packets-per-second transmitted statistics.
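On a Cisco box you might sanity-check this with something like the following (interface names assumed):
show ip route 0.0.0.0
show interfaces FastEthernet0/0 | include rate
show interfaces FastEthernet0/1 | include rate
The first command confirms both default routes are installed; under per-packet load sharing, the rate lines on the two WAN interfaces should track each other closely.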
The features you inquire about are very similar to per-packet and per-destination load sharing. However, if it's the same ISP, you can safely leave it on per-packet; the 'optimized' option is more for those load-balancing two connections to different providers. Note that changing this option only affects your outbound traffic and has no effect on inbound.
It's quite unlikely that you'll be able to implement a two-way load-balanced connection without help (and likely a fee) from your ISP. Your ISP ought to be able to advise you on settings that suit your situation.
However, it is my opinion, given what I know about your network design, that there will not be any noticeable trouble with per-packet.
"The TCP protocol expects packets to arrive in sequential order." I don't believe that is correct. TCP was designed to contain sequence numbers so that out-of-order packets can be reassembled into the correct order. I think the real issue with multi-WAN setups is more that a TCP connection is a contract between only two IP addresses. If packets arrive from a third address that hasn't yet opened a connection, these packets will most likely be dropped. I'm actually baffled by how these TP-Link devices can download a single file over SpeedTest.net & have it split the stream over both WAN links. – Simon East – 2016-09-08T01:46:38.073
@SimonEast: How can TCP packets arrive out of order when the sender will wait for ack on each one? – harrymc – 2016-09-08T04:45:25.267
This can happen even over a single connection to the internet because of TCP windowing (read more here), where multiple packets are sent before waiting for acknowledgement. Since those packets could actually take different routes to their destination, the packets can arrive out of order. The networking stack is supposed to handle this reordering gracefully before passing the data to the application. – Simon East – 2016-09-08T05:19:06.340
I think there is a misunderstanding between packet and frame. A packet may be larger than an Ethernet frame. – harrymc – 2016-09-08T06:36:36.710
No, my point was that TCP does not expect packets to always arrive in sequential order. It has specific mechanisms to handle out-of-order packets. As it says on Wikipedia, *"due to congestion, load balancing, or other network issues, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data ... if the data still remains undelivered, its source is notified of this failure."* Your answer didn't acknowledge this. – Simon East – 2016-09-09T00:02:29.910
@SimonEast: Bare TCP cannot acknowledge out-of-order packets. Read the original decision from 1988. Reassembling packets is possible within the receive window via selective acknowledgments, operating only if both parties support it, negotiated when the connection is established. Protocol explained here. – harrymc – 2016-09-09T08:00:09.083