My friend once told me, in a debate, that whether someone has a Fast Ethernet or a gigabit router doesn't impact their internet speed, because the internet is much slower than 100 or 1000 Mbps. I disagreed, but without a very good way to explain why, so let me first ask:
Does a gigabit router vs a fast router impact the data transfer speed from the internet to an endpoint device?
I haven't found many answers online, but I think it does. Specifically, I think it does because of the following math:
Fs = Final speed
Rs = Router speed
Is = Internet speed
Ft = Final time
Sd = Size of Data
Ft = (Sd / Is) + (Sd / Rs) // Time to reach router + time to reach device (from router)
Fs = Sd / Ft // final speed is equal to the data size divided by the total time
Fs = Sd / ((Sd / Is) + (Sd / Rs))
Fs = 1 / ((1 / Is) + (1 / Rs)) // divide numerator and denominator by Sd
// or
1 / Fs = (1 / Is) + (1 / Rs) // resembles some circuit equations
// comparatively
Fs (gigabit, 20 Mbps internet) = 1 / ((1 / 20) + (1 / 1000)) = 19.6 Mbps
Fs (gigabit, 50 Mbps internet) = 1 / ((1 / 50) + (1 / 1000)) = 47.6 Mbps
Fs (fast, 20 Mbps internet) = 1 / ((1 / 20) + (1 / 100)) = 16.7 Mbps
Fs (fast, 50 Mbps internet) = 1 / ((1 / 50) + (1 / 100)) = 33.3 Mbps
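For anyone who wants to check the arithmetic without doing it by hand, here is the same formula in a few lines of Python (a sketch of my model; the 20 and 50 Mbps figures are just example internet speeds):

```python
def series_speed(internet_mbps, router_mbps):
    # Effective speed if the whole payload must cross the internet
    # link and then the router link one after the other.
    return 1 / (1 / internet_mbps + 1 / router_mbps)

print(round(series_speed(20, 1000), 1))  # gigabit router: 19.6
print(round(series_speed(50, 1000), 1))  # gigabit router: 47.6
print(round(series_speed(20, 100), 1))   # fast router:    16.7
print(round(series_speed(50, 100), 1))   # fast router:    33.3
```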
And it would seem, from this, that there is quite a big difference. The problem is, even if I'm right, I can't explain it to him this way (not everyone is comfortable talking in math). So, is there any authoritative reference or benchmark that answers this question? I've had plenty of people say that it doesn't, without much elaboration.
Edit: I should clarify that whenever I say "internet speed," I am referring to the speed from the internet to the endpoint device.
Edit: I realize that most of the answers I get are going to say no. So I think it's only fair that those answers tell me why I'm wrong about the following assumptions in my take on this question:
Routers have bus speeds all their own (apart from internet speeds), which are constant (either 10, 100, or 1000 Mbps, with nothing in between).
This is the way I imagine what is happening:
internet --(20Mbps)--> router --(1000Mbps)--> device
Every byte sent to a router has to be received into the router's RAM before it can be re-transmitted to the device, as opposed to flowing straight through to the cable of the device receiving the data.
Update: I'm not going to accept any answer without a benchmark. Since there might not already be a posted benchmark for this, I'm going to put one together. If I'm right, I'll post the results (I'll probably post the results even if I'm wrong). If I'm wrong, I'll accept the best posted answer and call it a day.
Edit: I don't think anyone has really understood the point I'm making, so I'm very reluctant to accept an answer. Forget, for a moment, that I'm talking about networking and consider three arbitrary bus speeds:
Starting point -b0-> (Node 1) -b1-> (Node 2) -b2-> End point
Every single bit of data has to be stopped at every node and transferred again to the next, sequentially (in this scenario, every node receives and transmits at the same time). Now consider, again, the math that calculates the amount of time it takes for data (of any size) to reach the end point.
TotalTime = (DataSize / BusSpeed0) + (DataSize / BusSpeed1) + (DataSize / BusSpeed2)
TotalTime = DataSize * ((1 / BusSpeed0) + (1 / BusSpeed1) + (1 / BusSpeed2))
TotalSpeed = DataSize / TotalTime
TotalSpeed = DataSize / (DataSize * ((1 / B0) + (1 / B1) + (1 / B2)))
TotalSpeed = 1 / ((1 / B0) + (1 / B1) + (1 / B2))
This is the same way networks transmit data (the same way every wired device transmits data), so how could it be wrong?
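Under that assumption, the three-bus case is just one more term in the sum; here it is as a short script (a sketch of the model exactly as stated above, with arbitrary example speeds in Mbps):

```python
def series_speed(bus_speeds_mbps):
    # TotalSpeed = 1 / (1/B0 + 1/B1 + 1/B2 + ...), assuming every
    # node buffers the entire payload before re-transmitting it.
    return 1 / sum(1 / b for b in bus_speeds_mbps)

print(round(series_speed([20, 1000, 1000]), 1))  # 19.2
```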
What you're missing is that, to a first approximation, only the speed-limiting step matters. Think of the data like a physical object and the links like an assembly line. If quality control is the slowest step and releases one product a minute, then the line will produce one product a minute no matter how fast or slow every other step is, so long as each is faster than one per minute. – David Schwartz – 2018-05-02T16:24:55.060
Down vote from me -
"I'm not going to accept any answer without a benchmark."
- Why? You don't need a benchmark to prove this. It's simple knowledge to interpret. Users can easily show you this in their answers. – jwbensley – 2013-07-30T09:26:03.740

Do you realize that by writing TotalTime = (DataSize / BusSpeed0) + (DataSize / BusSpeed1) + (DataSize / BusSpeed2), you implicitly assume that each node waits to have received the whole data before sending it to the next one? And I'm talking about math and logic here, not networking. – Levans – 2013-07-30T20:25:26.480

@Levans You may be right, I'll have to sit on that for a moment. – tay10r – 2013-07-30T20:32:36.217
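Levans' point can be illustrated numerically: if the router forwards packet-by-packet instead of buffering the whole file, the extra delay per hop is one packet-time, not one file-time, and the effective speed collapses to the slowest link. A sketch with assumed figures (a 100-megabit download, a 1500-byte ≈ 0.012-megabit packet, a 20 Mbps line into a gigabit router):

```python
def whole_file_speed(size_mbit, b0, b1):
    # The question's model: the router receives the entire file
    # before it starts re-transmitting any of it.
    return size_mbit / (size_mbit / b0 + size_mbit / b1)

def per_packet_speed(size_mbit, packet_mbit, b0, b1):
    # Store-and-forward per packet (with b1 >= b0): the last packet
    # leaves the router one packet-time after it finishes arriving.
    return size_mbit / (size_mbit / b0 + packet_mbit / b1)

print(round(whole_file_speed(100, 20, 1000), 2))         # 19.61
print(round(per_packet_speed(100, 0.012, 20, 1000), 2))  # 20.0
```

With per-packet forwarding the gigabit vs. Fast Ethernet difference all but vanishes when the internet link is the bottleneck, which is what the downvoting commenters are getting at.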