"Reliable" does not mean the same thing for everyone. For TCP, reliable means that if you use it on lossy networks, or on networks that corrupt and/or reorder packets, you will not get garbled data: what comes out at the far end is the data that was sent.
The problem is that TCP sucks at this, and while it works in these cases, it's just horribly slow. When you use raw TCP on a link that regularly loses 0.5% of packets, you will get much less than the naive expectation of (link_data_rate × (1 − packet_loss_rate)), because TCP assumes that all packet loss is caused by congestion, and backs off accordingly.
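To put a rough number on how badly the congestion assumption hurts, the Mathis et al. approximation bounds steady-state TCP throughput by (MSS/RTT)·(C/√p). This is a sketch with made-up illustrative values for the link rate, MSS and RTT, not measurements from any particular network:

```python
from math import sqrt

def mathis_tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate upper bound on steady-state TCP throughput (bits/s),
    per the Mathis et al. formula: (MSS / RTT) * (C / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

link_bps = 100e6   # assumed 100 Mb/s link
loss = 0.005       # the 0.5% packet loss rate from the text

# Naive expectation: lose only the lost packets' share of the bandwidth.
naive_bps = link_bps * (1 - loss)

# What TCP can actually sustain (assumed 1460-byte MSS, 50 ms RTT).
tcp_bps = mathis_tcp_throughput_bps(1460, 0.050, loss)

print(f"naive expectation:  {naive_bps / 1e6:.1f} Mb/s")
print(f"TCP (Mathis bound): {tcp_bps / 1e6:.1f} Mb/s")
```

With these assumptions TCP tops out around 4 Mb/s on a 100 Mb/s link — a factor of ~25 below the naive figure — which is the "horribly slow" part in concrete terms.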
TCP was designed for networks that seldom lose packets, so it only has to tolerate packet loss, not perform well in the face of it.
One of the tasks of the reliable-link mechanisms at layer 2 is precisely to compensate for this TCP weakness. They aren't supposed to be 100% reliable like TCP. For example, 802.11 accepts losing a frame if the retransmission count gets above a certain threshold, whereas TCP will retransmit forever, until the application or user decides that enough is enough.
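The difference between the two retry policies can be sketched like this; the default limit of 7 attempts mirrors 802.11's typical short retry limit, but the function is purely illustrative, not an implementation of either protocol:

```python
def send_with_retry_limit(transmit, retry_limit=7):
    """802.11-style bounded retransmission: try at most retry_limit times,
    then give up and let the upper layers (e.g. TCP) deal with the loss.
    transmit() is assumed to return True when the frame is ACKed."""
    for _ in range(retry_limit):
        if transmit():
            return True
    return False  # frame dropped after exhausting the retry budget

# A channel that fails twice, then succeeds: delivered on the 3rd attempt.
attempts = iter([False, False, True])
print(send_with_retry_limit(lambda: next(attempts)))   # True

# A dead channel: 802.11 gives up after 7 tries; TCP, by contrast,
# would keep retransmitting (with growing timeouts) indefinitely.
print(send_with_retry_limit(lambda: False))            # False
```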
The reliable-link mechanism at layer 2 is mainly there for speed. For example, on 802.11, the ACK mechanism is also used by nodes to decide when to decrease the modulation rate as the wireless link degrades. When 802.11 can't use ACKs (e.g. for multicast frames), it typically falls back to the lowest modulation rate, often 1 Mb/s, for maximum reliability at a huge cost in speed.
Sometimes, when a path crosses many highly unreliable links, you really do need layer 2 reliability: without it, the end-to-end loss rates of all those links compound, and the path's packet loss rate gets too high to be useful. But this is seldom the case on typical networks.
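The compounding is just the product of per-link delivery probabilities (assuming independent losses). The hop count and loss rate below are made-up illustrative values:

```python
def path_delivery_rate(link_loss_rates):
    """End-to-end delivery probability over a path, assuming each link
    loses packets independently: the product of (1 - loss) per link."""
    prob = 1.0
    for p in link_loss_rates:
        prob *= (1 - p)
    return prob

# Ten hops, each losing 5% of frames without layer 2 retransmission:
rate = path_delivery_rate([0.05] * 10)
print(f"end-to-end delivery: {rate:.3f}")  # ~0.599, i.e. ~40% path loss
```

Individually-tolerable 5% links stack up to a path TCP can barely use, which is exactly when per-link retransmission earns its keep.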
Historically, TCP caught on because it allowed routers to drop packets. Routers could therefore be simpler and faster, and the overall result was that handling reliability at the endpoints performed better than handling it in the core network.
Even if all individual links were 100% reliable, a node which can be sent packets faster than it can pass them on will not be able to promise 100% reliability for packets sent under high-load conditions. TCP is designed to work on the presumption that individual links are expected to be relatively reliable. It is not designed to deal with links or nodes that drop packets for reasons other than congestion. – supercat – 2016-12-25T21:08:57.787