My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy—it’s analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and reestablish its connection a couple of times since it started. The poor connection of course means more corrupted packets (not necessarily dropped per se), which causes the TCP/IP stack to retransmit packets more often. Reduced throughput aside, I got to wondering whether this increases the actual bandwidth usage. That is, if there is a high error rate on the line causing packets to be retransmitted:
- Does this increase a bandwidth monitoring program’s numbers?
- Does the ISP count the retransmitted packets toward the monthly cap?
Based on what I remember from my university networking courses and common sense, I have a feeling that the answer to both questions is yes, but I cannot reliably measure the first, and have no authoritative answer for the second. I’m wondering if maybe the retransmitted packets are acknowledged as being duplicates and thus not counted somewhere along the line.
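One way to at least get a handle on the first question, on Linux, is to watch the kernel's own TCP retransmission counter (`RetransSegs` in `/proc/net/snmp`) alongside a bandwidth monitor: if the monitor counts interface bytes, retransmitted segments will show up in its totals. A minimal sketch of parsing that counter (the file format and path are Linux-specific; the sample text below is a trimmed, illustrative excerpt, not real machine output):

```python
def parse_retrans_segs(snmp_text):
    """Return the RetransSegs counter from /proc/net/snmp text.

    The Tcp section is two lines: a header line naming each field,
    then a value line with the counters in the same order.
    """
    lines = snmp_text.splitlines()
    for i, line in enumerate(lines):
        fields = line.split()
        # The header line's second token is a field name, not a number.
        if fields and fields[0] == "Tcp:" and not fields[1].lstrip("-").isdigit():
            values = lines[i + 1].split()
            return int(dict(zip(fields[1:], values[1:]))["RetransSegs"])
    raise ValueError("no Tcp section found")

# Trimmed example of the two Tcp lines (field subset, made-up numbers):
sample = (
    "Tcp: RtoAlgorithm ActiveOpens OutSegs RetransSegs InErrs\n"
    "Tcp: 1 10 123456 42 0\n"
)

print(parse_retrans_segs(sample))  # -> 42

# On a real Linux box you would read the live counter instead:
# with open("/proc/net/snmp") as f:
#     print(parse_retrans_segs(f.read()))
```

Sampling this counter before and after a large transfer, and comparing the delta against the interface byte counters (e.g. `/sys/class/net/<iface>/statistics/tx_bytes`), would show how much of the measured traffic was retransmission overhead. None of this answers the ISP-side question, of course, since their metering happens upstream of your machine.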
Close the question for “too-localized” or “not-constructive”‽ Seriously‽ How the heck is asking how a poor network signal affects transfer counts not constructive or too localized? It is a general computer-network related question that is quite valid and informative. If the votes to close were at least to migrate it to Server Fault, that would be rational and make sense, since it fits better there (I hadn’t thought of asking there), but too-localized and not-constructive‽ Ridiculous! I’m just glad that Seth managed to get his good answer in. – Synetech – 2012-03-27T22:27:51.943