Transatlantic ping faster than sending a pixel to the screen?



John Carmack tweeted,

I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, this is what I would like to get answered:

How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the time that software triggers the packet to the point where it’s received by software above the driver level?

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above the driver level changes that pixel’s value?

Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded in an IP packet, travel from main memory across to his network card, go from there through a cable in the wall into another building, probably hop across a few servers (but let’s assume it needs just a single relay), get converted to photons for the trip across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let’s stop there.

As for the pixel, this is a simple machine word that gets sent across the PCI express slot, written into a buffer, which is then flushed to the screen. Even accounting for the fact that “single pixels” probably result in the whole screen buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one” – rather, they are consecutive electrical impulses which are transferred without latency between them (right?).
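To put rough numbers on the packet side of this, the physics can be sketched in a few lines (the distance, fiber speed, and buffered-frame count below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope lower bounds; all inputs are rough assumptions.

C_FIBER_KM_S = 200_000       # light in optical fiber is ~2/3 c, in km/s
FIBER_DISTANCE_KM = 6_000    # rough US east coast -> Europe cable route

# Propagation alone, ignoring routers, serialization, and the OS stack:
one_way_ms = FIBER_DISTANCE_KM / C_FIBER_KM_S * 1000
print(f"one-way fiber propagation: {one_way_ms:.0f} ms")

# One refresh interval of a 60 Hz display:
frame_ms = 1000 / 60
print(f"one 60 Hz frame: {frame_ms:.1f} ms")

# The tweet only works if the display pipeline buffers several frames:
buffered_frames = 5
print(f"{buffered_frames} buffered frames: {buffered_frames * frame_ms:.0f} ms")
```

Propagation alone already eats more than one 60 Hz frame, so the comparison hinges entirely on how many frames the display pipeline buffers.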

Konrad Rudolph

Posted 2012-05-01T09:30:45.533

Reputation: 7 043

This complaint is spurious. It's not a problem, and furthermore it makes complete sense. Because (unless the person plugging the desktop monitor into the VGA / HDMI / DVI port has very specialized requirements and is also an idiot) that "screen" he's talking about is meant to be processed by the human visual system. Which processes frames at ~30 fps. Network packets are used, among other things, to sync clocks. Human eyes aren't getting any better, nor is our optical cortex getting any faster, so why should our screens update more often? Is he trying to embed subliminal messages in his games? – Parthian Shot – 2014-07-13T05:20:09.617

So I suppose my parenthetical answer to your question "How can this be true?" is "There is no logical reason for people to pour resources into one over the other". At the moment, output frame rates on normal display devices are far faster than the human eye can detect. They're better than they need to be already. Networking, however, allows for distributed processing; it is what drives supercomputers. It still needs work. – Parthian Shot – 2014-07-13T05:27:42.270

@Parthian There’s nothing “spurious” here, because your reasoning contains two errors. The first error is that even with high latency you can presumably develop protocols to update clocks. In fact, when I ping a site in the US, the latency is three times too high for 30 FPS (~100 ms). Second of all, your fancy reasoning simply ignores hard constraints placed by physics: due to the speed of light, the minimum ping we can hope to attain is 32 ms, which is the same as the human eye’s refresh rate, and this ignores lots of fancy signal processing on the way. – Konrad Rudolph – 2014-07-13T10:06:57.870

@Parthian To make the signal processing point more salient: read John’s answer about the latencies inherent in display hardware, and then his statement that “[t]he bad performance on the Sony is due to poor software engineering”. On the network side, the signal needs to cross (at the least) through the network card, the router, a server this side of the atlantic, and all this twice. And you are saying that all this can be done trivially (because, hey, my question is spurious) in <1 ms, whereas the video system has higher latencies than this for several of its steps (see John’s answer again). – Konrad Rudolph – 2014-07-13T10:12:28.130

@KonradRudolph "even with high latency you can presumably develop protocols to update clocks" I didn't say "with high latency", and there is such a protocol. It's called NTP, and it's used pretty much everywhere. "when I ping a site in the US, the latency is three times too high for 30 FPS" You're making my point; namely, that network speed needs to improve, but display technology doesn't. So OF COURSE more research needs to go into networks. – Parthian Shot – 2014-07-14T15:30:45.797

@KonradRudolph "your fancy reasoning simply ignores hard constraints placed by physics" I'm a computer engineer. So, yes, I've taken some special relativity. That's kind of orthogonal to my point. "you are saying that all this can be done trivially" I'm not. What I'm saying is that people have put way more effort into making it faster because it needs to be faster, but no one puts effort into display technology because it doesn't. Hence, one is much faster; not because it's easier, but because people have worked way harder on it. – Parthian Shot – 2014-07-14T15:32:56.737

@ParthianShot I know that there is such a protocol. From your comment it appeared as if you didn’t. – To your overall point: you claim that my question is moot because of reasons, but I’ve shown that these reasons are simply not a sufficient argument, and partially false. And when you say “you’re making my point” – no, I’ve contradicted it. To make it blindingly obvious: the best ping we can hope for under ideal conditions is just barely on par with adequate (not great) display speed, so there’s no reason to assume it should be faster. – Konrad Rudolph – 2014-07-14T15:40:08.187

@KonradRudolph "the best ping we can hope for under ideal conditions is just barely on par with adequate (not great) display speed" ...Okay, I think you don't get the point I'm trying to make, because I agree with that. "so there’s no reason to assume it should be faster" And I agree with that. What I'm saying is, while there's no physical reason display devices would need to be slow, there's no financial reason for them to be fast. Physically, there's no reason there can't be a nine-ton pile of mashed potatoes in the middle of Idaho. And that would be way easier than going to the moon. – Parthian Shot – 2014-07-14T18:00:36.690

@KonradRudolph But we've been to the moon, and there isn't an enormous pile of mashed potatoes at the center of Idaho, because no one cares enough to build or pay for such a pile. In the same way that no one cares enough to make affordable and widespread display technology that updates more than adequately. Because adequate is... adequate. – Parthian Shot – 2014-07-14T18:01:49.593

My ping time to Google is 10 ms and my screen is 60 Hz (16 ms pixel time). Just normal ADSL internet and Wireless-N – Suici Doga – 2016-09-10T03:10:48.693

You are all drowning in a glass of water! There are many factors involved that constantly create random latency. Think about it. – Frank R. – 2017-05-22T00:32:23.800

@FrankR. I think we’re all very well aware of that. The question is simply what the upper bound on these latencies is; and they can be quantified, and meaningfully compared, as the answers show. – Konrad Rudolph – 2017-05-22T12:05:28.160

Either he's crazy or this is an unusual situation. Due to the speed of light in fiber, you cannot get data from the US to Europe in less than about 60 milliseconds one way. Your video card puts out an entire new screen of pixels every 17 milliseconds or so. Even with double buffering, you can still beat the packet by quite a bit. – David Schwartz – 2012-05-01T09:38:13.497

@DavidSchwartz: You're thinking of the GPU in isolation. Yes, the GPU can do a whole lot of work in less than 60 ms. But John is complaining about the entire chain, which involves the monitor. Do you know how much latency is involved from when the image data is transmitted to the monitor until it is shown on the screen? The 17 ms figure is meaningless and irrelevant. Yes, the GPU prepares a new image every 17 ms, and yes, the screen displays a new image every 17 ms. But that says nothing about how long the image has been en route before it was displayed – jalf – 2012-05-01T09:59:27.843

@user1203: That's why I said, "even with double buffering". – David Schwartz – 2012-05-01T10:30:17.663

He's a game programmer, and he said *faster than I can send a pixel to the screen*... so perhaps account for 3D graphics rendering delay? Though that should be quite low in most video games; they optimise for performance, not quality. And of course, there's the very high chance he's just exaggerating (there, I stated the obvious, happy?). – Bob – 2012-05-01T10:51:13.173

Go to Best Buy some time and watch all the TV sets, where they have them all tuned to the same in-house channel. Even apparently identical sets will have a noticeable (perhaps quarter-second) lag relative to each other. But beyond that there's having to implement the whole "draw" cycle inside the UI (which may involve re-rendering several "layers" of the image). And, of course, if 3-D rendering or some such is required that adds significant delay. – Daniel R Hicks – 2012-05-01T11:43:25.657

There is a lot of room for speculation in the question; I don't think there is a perfect answer unless you know what J. Carmack was really talking about. Maybe his tweet was just some stupid comment on some situation he encountered. – Baarn – 2012-05-01T12:09:19.247

@Walter True. I asked the question because a lot of people retweeted it, suggesting some deep insight. Or not. I’d still be interested in a calculation comparing the two raw operations. As such, I don’t think the question is “not constructive”, as at least two people seem to think. – Konrad Rudolph – 2012-05-01T12:24:59.287

I think this question is very interesting, too. If an answer adding up all possible delays in modern hardware is acceptable for you, I don't see a problem. – Baarn – 2012-05-01T13:22:13.027

@slhck So far there’s only one answer, which isn’t speculating at all. But I’ll edit the question to make it clearer. EDIT Updated. Please consider all other discussions about the meaning of the tweet as off-topic. – Konrad Rudolph – 2012-05-01T14:01:27.543

Reminds me of the discussion on neutrinos being faster than light. No potential measurement errors anywhere?

– None – 2012-05-01T15:41:50.173

Of course. But reading John’s answer the measuring is pretty straightforward. There are plenty of opportunities for errors to creep in, but not so much in his measurements … – Konrad Rudolph – 2012-05-01T15:44:59.550

@DavidSchwartz double buffering still causes buffer deadlocks. You can only eliminate the deadlock using a triple buffer... – Breakthrough – 2012-05-01T16:22:14.560

@DavidSchwartz - distance Boston to London ~5000 km; add in ~1000 km for a non-direct route to a server directly on the other side; you get 6 000 km / (300 000 km/s) = 20 ms one-way travel time at the speed of light as roughly the lower limit. – dr jimbob – 2012-05-01T17:06:16.153

Note that a ping, an ICMP Echo Request, may be handled by software at the driver level or immediately above it at the bottom of the networking stack. – Tommy McGuire – 2012-05-01T19:40:41.757

The point is not that it was a very fast packet, but a very slow pixel. – Crashworks – 2012-05-01T23:53:19.737

@drjimbob the speed of light in fiber is a bit slower than in vacuum, it's just ~200 000 km/s. So the rough lower limit is ~60 ms for a two-way trip. – kolinko – 2012-05-02T08:34:50.607

@Merlin - Completely agree; which is why I presented it as a lower limit (and was doing one-way trip). Note that while optical fiber/coax-cable/ethernet cable is ~0.7 c (200 000 km/s), there are a couple of ways you could send an IP packet one-way significantly faster -- say transmission by satellite/radio (~.99c) or a ladder-line (~0.95c). – dr jimbob – 2012-05-02T13:41:46.267

Couldn't the ping be actually be served by a cache from the ISP? Isn't a traceroute pretty much the only way to tell if it's actually making it across the ocean? – Michael Frederick – 2012-05-02T20:31:38.440

@rickyduck You should have read the article linked by Neutrino. He’s saying the same as you. – Konrad Rudolph – 2012-05-03T16:06:27.723

@drjimbob, transmission by satellite is even slower since the signal has much further to go. Typical satellite ping times are more like 200-300 ms. – psusi – 2012-05-03T18:07:14.740

@MichaelFrederick, no, there is no such thing as caching for pings. Traceroute uses the same underlying packet, it just sets a short TTL and increases it by one until it gets the echo from the destination. – psusi – 2012-05-03T18:08:51.323

@psusi - Yes; but that's because most satellites you would use in practice would be in a geosynchronous orbit (orbital period = Earth's rotation period), so they are always visible to you at the same location in the sky (~36 000 km above the Earth's surface, and further if it's not directly above you). Granted, if you had a relay satellite in a low-earth orbit ~600 km above the Earth's surface, which orbits the Earth every ~100 minutes, visible to antennas following it in Boston/London, you could send a one-way IP packet in ~20 ms.

– dr jimbob – 2012-05-03T18:40:42.393

@psusi - By my calculations, as long as the satellite is halfway between Boston/London, the Earth is a perfect sphere, and the satellite is at a height d >= (sec θ - 1) R = 521 km (where R is the radius of the Earth, ~6400 km, and θ ~ 2500 km/6400 km ~ 0.4 rad is the angle between Boston and the satellite, the same as between the satellite and London), then the satellite can be seen by both, with a lower limit on the total travel distance of 2 sqrt((R+d)^2 - R^2) = 5270 km and a one-way travel time of ~18 ms. I use c for the lower limit, as methods faster than 0.7c are feasible, though not in practice. – dr jimbob – 2012-05-03T18:47:47.767
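dr jimbob's geometry can be checked numerically; a small sketch (variable names are mine, values rounded as in the comment):

```python
import math

R = 6400.0        # Earth radius in km, rounded as in the comment
theta = 2500 / R  # angle between each city and the midpoint, in radians

# Minimum satellite altitude for line-of-sight to both cities:
d_min = (1 / math.cos(theta) - 1) * R
print(f"minimum altitude: {d_min:.0f} km")    # ~521 km

# Shortest city-satellite-city signal path at that altitude:
path = 2 * math.sqrt((R + d_min) ** 2 - R ** 2)
print(f"signal path: {path:.0f} km")          # ~5270 km

# One-way time at c (radio is ~0.99c, so close to this bound):
t_ms = path / 300_000 * 1000
print(f"one-way time: {t_ms:.1f} ms")         # ~17.6 ms
```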

Today people are actually learning that electronics underlie what makes programming work. Programming is accessible to everyone, but designing something like an entire computer is not, and it has big repercussions in terms of cost and manufacturability. Graphics chips are very different from other chips, and the data still has to go through the screen hardware. Technology and physics are not as simple as programming, and they cost money. Deal with it, people. But still, it'd be quite cool if Carmack could change things like he did for graphics cards! – jokoon – 2012-05-03T19:40:34.293

@KonradRudolph I was just adding to the conversation, my article claimed that it was two errors, it was more of a reference than a reply – rickyduck – 2012-05-04T08:12:14.563

Transatlantic cables: see the CANTAT-3 cable. The time for light to travel from Nova Scotia to Iceland (part of Europe) in fiber is 16.7 ms.

– FredrikD – 2012-09-13T09:35:39.400

Apparently you can do a transatlantic ping faster, but that also means you wouldn't see it on the screen ;) – Stormenet – 2012-10-12T06:32:54.240


1 339

The time to send a packet to a remote host is half the time reported by ping, which measures a round trip time.

The display I was measuring was a Sony HMZ-T1 head mounted display connected to a PC.

To measure display latency, I have a small program that sits in a spin loop polling a game controller, doing a clear to a different color and swapping buffers whenever a button is pressed. I video record showing both the game controller and the screen with a 240 fps camera, then count the number of frames between the button being pressed and the screen starting to show a change.

The game controller updates at 250 Hz, but there is no direct way to measure the latency on the input path (I wish I could still wire things to a parallel port and use in/out asm instructions). As a control experiment, I do the same test on an old CRT display with a 170 Hz vertical retrace. Aero and multiple monitors can introduce extra latency, but under optimal conditions you will usually see a color change starting at some point on the screen (vsync disabled) two 240 Hz frames after the button goes down. It seems there is 8 ms or so of latency going through the USB HID processing, but I would like to nail this down better in the future.
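The frame-counting arithmetic behind this measurement is simple; a sketch (the helper name is mine):

```python
CAMERA_FPS = 240                 # frame rate of the recording camera
frame_ms = 1000 / CAMERA_FPS     # one camera frame is ~4.2 ms of resolution

def latency_ms(frames_counted: int) -> float:
    """Latency implied by the number of camera frames counted between
    the button press and the first visible change on screen."""
    return frames_counted * frame_ms

print(latency_ms(2))    # best case on a CRT, vsync disabled: ~8.3 ms
print(latency_ms(18))   # the Sony HMZ average: 75 ms ("70+ ms")
```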

It is not uncommon to see desktop LCD monitors take 10+ 240 Hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds.

This was in a multimonitor setup, so a couple frames are the driver's fault.

Some latency is intrinsic to a technology. LCD panels take 4-20 milliseconds to actually change, depending on the technology. Single chip LCoS displays must buffer one video frame to convert from packed pixels to sequential color planes. Laser raster displays need some amount of buffering to convert from raster return to back and forth scanning patterns. A frame-sequential or top-bottom split stereo 3D display can't update mid frame half the time.
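Those intrinsic stages can be tallied as a simple budget; the breakdown below is illustrative, using rough mid-range figures from this answer, not a measured pipeline:

```python
# Illustrative display latency budget, in milliseconds.
# Each value is a rough mid-range figure from the discussion above.
budget = {
    "USB HID input path": 8.0,
    "one buffered video frame (60 Hz)": 1000 / 60,  # e.g. LCoS color-plane conversion
    "LCD pixel response": 12.0,                     # panels range ~4-20 ms
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage}: {ms:.1f} ms")
print(f"total: {total:.1f} ms")  # already ~37 ms before any extra buffering
```

Each additional buffered frame in the chain adds another ~17 ms, which is how a pipeline reaches the 70+ ms measured on the Sony.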

OLED displays should be among the very best, as demonstrated by an eMagin Z800, which is comparable to a 60 Hz CRT in latency, better than any other non-CRT I tested.

The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.

This is very unfortunate, but it is all fixable, and I hope to lean on display manufacturers more about latency in the future.

John Carmack

Posted 2012-05-01T09:30:45.533

Reputation: 6 412

I'd like to not have to lock this answer for excessive off-topic comments. We're all thrilled that John provided this answer, but we don't need 25 comments all expressing their gratitude, disbelief, or excitement. Thank you. – nhinkle – 2012-05-02T08:48:52.967

Your USB trigger is probably running as a low-speed USB device (bus frames at 125 µs), causing a minimal 8 ms delay (hardware issue). Maybe try a PS/2 keyboard instead? – Boris – 2012-05-02T09:10:40.753

It would help if the timing you got for the monitor were more clearly expressed. I had to hunt a bit to find 70 ms in your (otherwise well written) answer. :) – Macke – 2012-05-03T06:12:00.513

@Marcus Lindblom by hunt for, you mean read? I think in this case, how he got to his number is just as important as the number - the skepticism regarding the tweet is not going to be addressed by citing another number. Also the context helps - he was most directly annoyed by this specific monitor with its sub-optimal software. – Jeremy – 2012-05-03T11:54:47.050

It sounds like you are saying that when LCD makers claim, say, a 5 ms response time, that may be the time it takes the raw panel to change, but the monitor adds quite a bit more time buffering and processing the signal before it actually drives the LCD. Doesn't that mean the manufacturers are publishing false/misleading specs? – psusi – 2012-05-03T18:19:10.810

Hopefully in the future, direct-view LED displays will be readily available. Sony has announced one that will be coming out within the next year or two, and I actually had an opportunity to look at one and talk to one of the engineers behind it. I specifically asked about latency, and he said it was on the order of nanoseconds. Plus a 60" screen was razor-thin, lightweight, and took something like 20 watts to operate, so I mean, how is this NOT a winning technology? – fluffy – 2012-05-14T15:18:47.683

Here's how I measure display latency: Most chipsets provide some GPIO pins, which you can toggle with an outp instruction (your program must run very privileged, of course, for this to work). Then clone the screen on a digital and an analogue connection. The display goes to digital. Put a photodiode on the display and hook up the analogue video and the photodiode to an oscilloscope, and the scope's external trigger to the GPIO. Now you can use the GPIO for triggering and accurately measure the time it takes for the signal to appear on the line and the display. – datenwolf – 2012-05-22T09:56:50.523


Some monitors can have significant input lag

Accounting for an awesome internet connection compared to a crappy monitor and video card combo, it's possible.


Console Gaming: The Lag Factor • Page 2

So, at 30FPS we get baseline performance of eight frames/133ms, but in the second clip where the game has dropped to 24FPS, there is a clear 12 frames/200ms delay between me pulling the trigger, and Niko beginning the shotgun firing animation. That's 200ms plus the additional delay from your screen. Ouch.

A display can add another 5-10 ms

So, a console can have up to 210 ms of lag
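The frame counts quoted above convert to milliseconds at the article's 60 Hz capture rate; a sketch of the arithmetic (names are mine):

```python
CAPTURE_HZ = 60  # the quoted analysis counts lag in 60 Hz capture frames

def lag_ms(frames: int) -> float:
    """Milliseconds of lag implied by a count of 60 Hz frames."""
    return frames * 1000 / CAPTURE_HZ

print(lag_ms(8))        # baseline at 30 FPS: ~133 ms
print(lag_ms(12))       # game dropped to 24 FPS: 200 ms
print(lag_ms(12) + 10)  # plus ~10 ms of display lag: 210 ms total
```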

And, as per David's comment, the best case should be about 70 ms for sending a packet


Posted 2012-05-01T09:30:45.533

Reputation: 3 682

-1 I don't think that John Carmack uses a crappy monitor or video card. Please reference your claim with credible sources. – Baarn – 2012-05-01T10:41:15.560

@WalterMaier-Murdnelch added source. It's a console, but I imagine a PC would have similar lag – Akash – 2012-05-01T10:48:37.013

Sorry but I still don’t see this really answering the question. The quote tells about “pulling the trigger” and this implies much more work, as in input processing, scene rendering etc., than just sending a pixel to the screen. Also, human reaction speed is relatively lousy compared to modern hardware performance. The time between the guy thinking he pulled the trigger, and actually pulling it, could well be the bottleneck. – Konrad Rudolph – 2012-05-01T10:57:55.947

The linked article shows that the author of this analysis purchased a special device that can show you exactly when the button was pressed, so I don't think they're just winging the numbers. – Melikoth – 2012-05-01T13:40:17.560

@KonradRudolph: Perception is pretty weird stuff. I read an article a while ago about an experimental controller that read impulses directly off the spinal cord. People would feel that the computer was acting before they had clicked, even though it was their own nerve command to click it was reacting to. – Zan Lynx – 2012-05-01T16:48:43.743

@Zan Lynx: This is a known effect. Google for "Benjamin Libet's Half Second Delay". Human consciousness requires significant processing time. Everything that you think is happening now actually happened in the past. All your senses are giving you an "integrated multi-media experience" of an event from half a second ago. Furthermore, events appear to be "time stamped" by the brain. A direct brain stimulation has to be delayed relative to a tactile stimulation in order for the subject to report the sensations as simultaneous! – Kaz – 2012-05-01T21:24:54.400


It is very simple to demonstrate input lag on monitors: just stick an LCD next to a CRT, show a clock or an animation filling the screen, and record it. One can be a second or more behind. It is something that LCD manufacturers have tightened up on since gamers, etc., have noticed it more.

E.g. YouTube video: Input Lag Test Vizio VL420M


Posted 2012-05-01T09:30:45.533

Reputation: 1 591