I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?
And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.
But this is John Carmack.
How can this be true?
To avoid discussions about what exactly is meant in the tweet, this is what I would like to have answered:
How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the time that software triggers the packet to the point where it is received by software above the driver level?
How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above the driver level changes that pixel's value?
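For concreteness on the packet half of these two questions, here is a minimal sketch of the kind of measurement I have in mind: timing an application-level UDP round trip and halving it as a crude one-way estimate. The host name and port are placeholders, and it assumes a UDP echo service is actually listening there; the pixel half cannot be timed purely in software, since the measurement has to end at the physical screen.

```python
# Rough sketch (not a rigorous measurement): time an application-level
# UDP round trip and halve it to approximate the one-way latency.
# ECHO_HOST and ECHO_PORT are placeholders; a UDP echo service is
# assumed to be listening there.
import socket
import time

ECHO_HOST = "echo.example.eu"  # hypothetical server in Europe
ECHO_PORT = 7                  # classic echo port, rarely open in practice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)

start = time.perf_counter()
sock.sendto(b"x", (ECHO_HOST, ECHO_PORT))
sock.recvfrom(64)              # blocks until the echo comes back (or times out)
rtt = time.perf_counter() - start

print(f"round trip: {rtt * 1000:.1f} ms, one-way estimate: {rtt * 500:.1f} ms")
```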
Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded in an IP packet, get from main memory across to his network card, travel from there through a cable in the wall into another building, probably hop across a few servers there (but let's assume it needs just a single relay), get photonized across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let's stop there.
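To put a floor under that path, a back-of-the-envelope calculation with my own round numbers (a roughly 6 000 km route and light propagating at about 0.7c in glass) gives something like 29 ms one way before a single router, queue or relay is counted:

```python
# Back-of-the-envelope lower bound for the one-way transatlantic hop.
# Route length and the 0.7c figure for fibre are assumptions, not measurements.
ROUTE_KM = 6_000        # US east coast to western Europe, with some slack
C_KM_PER_S = 300_000    # speed of light in vacuum
FIBRE_FACTOR = 0.7      # glass slows propagation to roughly 0.7c

one_way_ms = ROUTE_KM / (C_KM_PER_S * FIBRE_FACTOR) * 1000
print(f"fibre propagation alone: ~{one_way_ms:.0f} ms one way")  # ~29 ms
```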
As for the pixel, it's a simple machine word that gets sent across the PCI Express bus, written into a buffer, which is then flushed to the screen. Even accounting for the fact that “single pixels” probably result in the whole screen buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one” – rather, they are consecutive electrical impulses which are transferred without latency between them (right?).
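The same kind of rough arithmetic on the display side, using illustrative numbers rather than measurements: at 60 Hz a new frame only starts every 16.7 ms, and the frame is scanned out across that whole interval, so a pixel change can miss one refresh and then arrive late in the next scan-out, before any buffering inside the monitor itself is counted.

```python
# Illustrative display-side arithmetic for a 1920x1080 panel at 60 Hz.
# All figures are assumptions made for the sake of the estimate.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 3
REFRESH_HZ = 60

frame_period_ms = 1000 / REFRESH_HZ             # ~16.7 ms between refreshes
frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL  # ~6.2 MB pushed per refresh

# Worst case: the change just misses one refresh and is scanned out near the
# end of the following one, roughly two frame periods; monitor buffering is extra.
worst_case_ms = 2 * frame_period_ms

print(f"frame period: {frame_period_ms:.1f} ms, "
      f"frame size: {frame_bytes / 1e6:.1f} MB, "
      f"worst case before monitor processing: {worst_case_ms:.1f} ms")
```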
This complaint is spurious. It's not a problem, and furthermore it makes complete sense. Because (unless the person plugging the desktop monitor into the VGA / HDMI / DVI port has very specialized requirements and is also an idiot) that "screen" he's talking about is meant to be processed by the human visual system. Which processes frames at ~30 fps. Network packets are used, among other things, to sync clocks. Human eyes aren't getting any better, nor is our optical cortex getting any faster, so why should our screens update more often? Is he trying to embed subliminal messages in his games? – Parthian Shot – 2014-07-13T05:20:09.617
So I suppose my parenthetical answer to your question "How can this be true?" is "There is no logical reason for people to pour resources into one over the other". At the moment, output frame rates on normal display devices are far faster than the human eye can detect. They're better than they need to be already. Networking, however, allows for distributed processing; it is what drives supercomputers. It still needs work. – Parthian Shot – 2014-07-13T05:27:42.270
@Parthian There’s nothing “spurious” here, because your reasoning contains two errors. The first error is that even with high latency you can presumably develop protocols to update clocks. In fact, when I ping a site in the US, the latency is three times too high for 30 FPS (~100 ms). Second of all, your fancy reasoning simply ignores hard constraints placed by physics: due to the speed of light, the minimum ping we can hope to attain is 32 ms, which is the same as the human eye’s refresh rate, and this ignores lots of fancy signal processing on the way. – Konrad Rudolph – 2014-07-13T10:06:57.870
@Parthian To make the signal processing point more salient: read John’s answer about the latencies inherent in display hardware, and then his statement that “[t]he bad performance on the Sony is due to poor software engineering”. On the network side, the signal needs to pass (at the least) through the network card, the router, a server this side of the Atlantic, and all this twice. And you are saying that all this can be done trivially (because, hey, my question is spurious) in <1 ms, whereas the video system has higher latencies than this for several of its steps (see John’s answer again). – Konrad Rudolph – 2014-07-13T10:12:28.130
2@KonradRudolph "even with high latency you can presumably develop protocols to update clocks" I didn't say "with high latency", and there is such a protocol. It's called NTP, and it's used pretty much everywhere. "when I ping a site in the US, the latency is three times too high for 30 FPS" You're making my point; namely, that network speed needs to improve, but display technology doesn't. So OF COURSE more research needs to go into networks. – Parthian Shot – 2014-07-14T15:30:45.797
2@KonradRudolph " your fancy reasoning simply ignores hard constraints placed by physics" I'm a computer engineer. So, yes, I've taken some special relativity. That's kind of orthogonal to my point. "you are saying that all this can be done trivially" I'm not. What I'm saying is that people have put way more effort into making it faster because it needs to be faster, but no one puts effort into display technology because it doesn't. Hence, one is much faster; not because it's easier, but because people have worked way harder on it. – Parthian Shot – 2014-07-14T15:32:56.737
@ParthianShot I know that there is such a protocol. From your comment it appeared as if you didn’t. – To your overall point: you claim that my question is moot because of reasons, but I’ve shown that these reasons are simply not a sufficient argument, and partially false. And when you say “you’re making my point” – no, I’ve contradicted it. To make it blindingly obvious: the best ping we can hope for under ideal conditions is just barely on par with adequate (not great) display speed, so there’s no reason to assume it should be faster. – Konrad Rudolph – 2014-07-14T15:40:08.187
2@KonradRudolph "the best ping we can hope for under ideal conditions is just barely on par with adequate (not great) display speed" ...Okay, I think you don't get the point I'm trying to make, because I agree with that. "so there’s no reason to assume it should be faster" And I agree with that. What I'm saying is, while there's no physical reason display devices would need to be slow, there's no financial reason for them to be fast. Physically, there's no reason there can't be a nine-ton pile of mashed potatoes in the middle of Idaho. And that would be way easier than going to the moon. – Parthian Shot – 2014-07-14T18:00:36.690
2@KonradRudolph But we've been to the moon, and there isn't an enormous pile of mashed potatoes at the center of Idaho, because no one cares enough to build or pay for such a pile. In the same way that no one cares enough to make affordable and widespread display technology that updates more than adequately. Because adequate is... adequate. – Parthian Shot – 2014-07-14T18:01:49.593
My ping time to Google is 10 ms and my screen is 60 Hz (16 ms pixel time). Just normal ADSL internet and Wireless-N – Suici Doga – 2016-09-10T03:10:48.693
You are all drowning in a glass of water! There are many factors involved that constantly create random latency. Think about it. – Frank R. – 2017-05-22T00:32:23.800
@FrankR. I think we’re all very well aware of that. The question is simply what the upper bound on these latencies is; and they can be quantified, and meaningfully compared, as the answers show. – Konrad Rudolph – 2017-05-22T12:05:28.160
Either he's crazy or this is an unusual situation. Due to the speed of light in fiber, you cannot get data from the US to Europe in less than about 60 milliseconds one way. Your video card puts out an entire new screen of pixels every 17 milliseconds or so. Even with double buffering, you can still beat the packet by quite a bit. – David Schwartz – 2012-05-01T09:38:13.497
@DavidSchwartz: You're thinking of the GPU in isolation. Yes, the GPU can do a whole lot of work in less than 60ms. But John is complaining about the entire chain, which involves the monitor. Do you know how much latency is involved from when the image data is transmitted to the monitor until it is shown on the screen? The 17ms figure is meaningless and irrelevant. Yes, the GPU prepares a new image every 17 ms, and yes, the screen displays a new image every 17 ms. But that says nothing about how long the image has been en route before it was displayed – jalf – 2012-05-01T09:59:27.843
@user1203: That's why I said, "even with double buffering". – David Schwartz – 2012-05-01T10:30:17.663
He's a game programmer, and he said *faster than I can send a pixel to the screen*... so perhaps account for 3D graphics rendering delay? Though that should be quite low in most video games; they optimise for performance, not quality. And of course, there's the very high chance he's just exaggerating (there, I stated the obvious, happy?). – Bob – 2012-05-01T10:51:13.173
Go to Best Buy some time and watch all the TV sets, where they have them all tuned to the same in-house channel. Even apparently identical sets will have a noticeable (perhaps quarter-second) lag relative to each other. But beyond that there's having to implement the whole "draw" cycle inside the UI (which may involve re-rendering several "layers" of the image). And, of course, if 3-D rendering or some such is required that adds significant delay. – Daniel R Hicks – 2012-05-01T11:43:25.657
There is a lot of room for speculation in this question; I don't think there is a perfect answer unless you know what J. Carmack was really talking about. Maybe his tweet was just some stupid comment on some situation he encountered. – Baarn – 2012-05-01T12:09:19.247
@Walter True. I asked the question because a lot of people retweeted it, suggesting some deep insight. Or not. I’d still be interested in a calculation comparing the two raw operations. As such, I don’t think the question is “not constructive”, as at least two people seem to think. – Konrad Rudolph – 2012-05-01T12:24:59.287
I think this question is very interesting, too. If an answer adding up all possible delays in modern hardware is acceptable for you, I don't see a problem. – Baarn – 2012-05-01T13:22:13.027
@slhck So far there’s only one answer, which isn’t speculating at all. But I’ll edit the question to make it clearer. EDIT Updated. Please consider all other discussions about the meaning of the tweet as off-topic. – Konrad Rudolph – 2012-05-01T14:01:27.543
Reminds me of the discussion on neutrinos being faster than light. http://news.sciencemag.org/scienceinsider/2012/02/breaking-news-error-undoes-faster.html No potential measurement errors anywhere? – None – 2012-05-01T15:41:50.173
Of course. But reading John’s answer the measuring is pretty straightforward. There are plenty of opportunities for errors to creep in, but not so much in his measurements … – Konrad Rudolph – 2012-05-01T15:44:59.550
@DavidSchwartz double buffering still causes buffer deadlocks. You can only eliminate the deadlock by using a triple buffer... – Breakthrough – 2012-05-01T16:22:14.560
@DavidSchwartz - distance Boston to London ~5000 km; add in ~1000 km for a non-direct route to a server directly on the other side; you get 20 ms one-way travel time at the speed of light, 6 000 km / (300 000 km/s) = 20 ms, as roughly the lower limit. – dr jimbob – 2012-05-01T17:06:16.153
Note that a ping, an ICMP Echo Request, may be handled by software at the driver level or immediately above it at the bottom of the networking stack. – Tommy McGuire – 2012-05-01T19:40:41.757
The point is not that it was a very fast packet, but a very slow pixel. – Crashworks – 2012-05-01T23:53:19.737
@drjimbob the speed of light in fiber is a bit slower than in vacuum, it's just ~200 000 km/s. So the rough lower limit is ~60 ms for a two-way trip. – kolinko – 2012-05-02T08:34:50.607
@Merlin - Completely agree; which is why I presented it as a lower limit (and was doing one-way trip). Note that while optical fiber/coax-cable/ethernet cable is ~0.7 c (200 000 km/s), there are a couple of ways you could send an IP packet one-way significantly faster -- say transmission by satellite/radio (~.99c) or a ladder-line (~0.95c). – dr jimbob – 2012-05-02T13:41:46.267
Couldn't the ping be actually be served by a cache from the ISP? Isn't a traceroute pretty much the only way to tell if it's actually making it across the ocean? – Michael Frederick – 2012-05-02T20:31:38.440
@Neutrino http://slatest.slate.com/posts/2012/02/23/cern_neutrinos_two_errors_to_blame_for_faster_than_light_results.html – rickyduck – 2012-05-03T14:23:39.983
@rickyduck You should have read the article linked by Neutrino. He’s saying the same as you. – Konrad Rudolph – 2012-05-03T16:06:27.723
@drjimbob, transmission by satellite is even slower since the signal has much further to go. Typical satellite ping times are more like 200-300 ms. – psusi – 2012-05-03T18:07:14.740
@MichaelFrederick, no, there is no such thing as caching for pings. Traceroute uses the same underlying packet; it just sets a short TTL and increases it by one until it gets the echo from the destination. – psusi – 2012-05-03T18:08:51.323
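A minimal sketch of the TTL trick psusi describes, assuming the scapy package is installed and the script runs with raw-socket (root) privileges; the target host is a placeholder:

```python
# Sketch of traceroute's mechanism: send ordinary ICMP echo requests with an
# increasing TTL; each router that decrements the TTL to zero answers with an
# ICMP time-exceeded message, revealing itself as a hop on the path.
from scapy.all import IP, ICMP, sr1

def trace(host: str, max_hops: int = 30) -> None:
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=host, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")                       # hop did not answer
        elif reply[ICMP].type == 11:                    # time exceeded: a router on the path
            print(f"{ttl:2d}  {reply.src}")
        else:                                           # echo reply: destination reached
            print(f"{ttl:2d}  {reply.src}  (destination)")
            break

trace("example.org")  # hypothetical target
```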
@psusi - Yes; but that's because most satellites you would use in practice would be in a geosynchronous orbit (orbital period = earth rotation period), so they are always visible to you at the same location in the sky (~36 000 km above the earth's surface, plus further as it's not necessarily directly above you). Granted, if you had a relay satellite in a low-earth orbit at ~600 km above the Earth's surface, which orbits the earth every ~100 minutes, visible to antennas following it in Boston/London, you could send a one-way IP packet in ~20 ms. – dr jimbob – 2012-05-03T18:40:42.393
@psusi - By my calculations, as long as the satellite is halfway between Boston/London; the earth is a perfect sphere; and the satellite is at a height d >= (sec θ - 1)·R = 521 km (where R is the radius of the Earth, ~6400 km, and θ ~ 2500 km / 6400 km ~ 0.4 rad is the angle between Boston and the satellite, the same as between the satellite and London), then the satellite can be seen by both, with a lower limit of total travel distance of 2·sqrt((R+d)^2 - R^2) = 5270 km and a one-way travel time of ~18 ms. I use c for the lower limit, as methods faster than 0.7c are feasible - though not in practice. – dr jimbob – 2012-05-03T18:47:47.767
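For what it's worth, dr jimbob's numbers check out; a quick numerical re-derivation under the same assumptions (spherical Earth, satellite midway between the two cities, signal travelling at c):

```python
# Numerical check of the satellite geometry in the comment above.
import math

R = 6400.0              # Earth radius, km
C = 300_000.0           # speed of light in vacuum, km/s
theta = 2500.0 / R      # arc angle from each city to the sub-satellite point, rad

d_min = (1 / math.cos(theta) - 1) * R              # minimum altitude for line of sight
path = 2 * math.sqrt((R + d_min) ** 2 - R ** 2)    # city -> satellite -> city

print(f"minimum altitude: {d_min:.0f} km")                          # ~521 km
print(f"one-way path: {path:.0f} km, ~{path / C * 1000:.0f} ms at c")  # ~5270 km, ~18 ms
```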
Today people are actually learning that electronics is what underlies programming. Programming is accessible to everyone, but designing something like an entire computer is not, and it has big repercussions in terms of cost and manufacturability. Graphics chips are very different from other chips, and the data still has to go through the screen hardware. Technology and physics are not as simple as programming is, and they cost money. Deal with it, people. But still it'd be quite cool if Carmack could change things like he did for gfx cards! – jokoon – 2012-05-03T19:40:34.293
@KonradRudolph I was just adding to the conversation; my article claimed that it was two errors. It was more of a reference than a reply – rickyduck – 2012-05-04T08:12:14.563
Transatlantic cables, see the CANTAT-3 cable in http://en.wikipedia.org/wiki/Transatlantic_communications_cable. Time for light from Nova Scotia to Iceland (part of Europe) in fiber is 16.7 ms, see http://www.wolframalpha.com/input/?i=distance+halifax%2C+canada+iceland – FredrikD – 2012-09-13T09:35:39.400
Apparently you can do a transatlantic ping faster, but that also means you wouldn't see it on the screen ;) – Stormenet – 2012-10-12T06:32:54.240