
I am trying to get a sense of the different networking metrics, specifically those that deal with time, but I find myself lost in definitions. In my research so far I have found contradictory definitions depending on the source, but here is what I settled on (might be wrong!):

  • Latency: The time it takes a packet to travel from the source (say, the client) to the destination; simply put, travel time.

  • Round Trip Time (RTT): The time it takes a request to reach the destination and to return to the client.

  • Response Time: The time it takes a request to reach the destination, get processed, and for the result of the processing to return to the client.
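If I understand my own definitions correctly, the relationship between the three metrics would look something like this (all the numbers below are made up purely for illustration):

```python
# Hypothetical timings, purely to illustrate the definitions above.
one_way_latency_ms = 20                # client -> server travel time
server_processing_ms = 35              # time the server spends working

rtt_ms = 2 * one_way_latency_ms        # there and back, no processing
response_time_ms = rtt_ms + server_processing_ms  # what the client waits for

print(rtt_ms)            # 40
print(response_time_ms)  # 75
```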

So my questions are:

  1. Is RTT just 2 x latency?
  2. What is the difference between RTT and response time? It seems to me they are the same thing.
  3. How are latency and RTT measured? How is the processing time eliminated from the response time, which is relatively easy to measure?
  4. And finally, the question all of the above originated from: when using the ping command, is the time displayed latency, RTT, or response time?

Sorry for the many questions, but they are all related to each other, so I feel I shouldn't split them into multiple posts.

2 Answers


Of those, round trip time (RTT) definitely means there and back again over a network.

Latency and response time are more generic, and might not even refer to an IP network. A system can have a terrible response time and user experience because its storage is spindle-based and high-latency. Network latency probably means round trip, but say RTT to avoid ambiguity.

ICMP echo is not sufficient by itself as a performance measure. It is control data, and exercises different paths in both routers (the CPU control plane rather than the data-plane ASIC) and hosts (the OS ICMP implementation rather than user-space software). Your typical ping implementation measures round trip time, as the echo protocol does not include timestamps. (Few network stacks have ICMP TIMESTAMP enabled.)
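One user-space alternative to ICMP is to time a TCP handshake: the three-way handshake completes after one round trip, so the time `connect()` takes is roughly one RTT, measured on the data path your application actually uses. A minimal sketch (the host and port are placeholders, not anything from this question):

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443) -> float:
    """Approximate one network RTT via TCP handshake time, in ms.

    connect() returns once the three-way handshake completes, which
    takes one round trip, so this measures roughly one RTT in user
    space on the data path, rather than via ICMP control traffic.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # close immediately; we only wanted the handshake
    return (time.perf_counter() - start) * 1000

# Usage sketch (assumes the host is reachable on that port):
# print(f"{tcp_connect_rtt('example.com'):.1f} ms")
```

This still isn't perfect (SYN handling can also take slightly different paths than bulk data), but it avoids the control-plane caveats of ICMP echo.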

How is the processing time eliminated from the response time, which is relatively easy to measure?

By measuring both. What the user actually cares about is the time until their request is serviced. Total time may include several round trips between the user and the data center, server-side processing time (perhaps including calls to multiple APIs), and client-side processing time.
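"Measuring both" can be sketched over a raw TCP exchange: time the connect (roughly one RTT) and the full request/reply (roughly two RTTs plus server work), then subtract. The single-round-trip assumption and the request bytes here are simplifications for illustration, not a general-purpose tool:

```python
import socket
import time

def split_response_time(host: str, port: int, request: bytes):
    """Return (connect_ms, total_ms, est_processing_ms).

    Connect time approximates one RTT; the full exchange takes
    roughly two RTTs (connect + request/reply) plus server-side
    processing, assuming the reply starts within one round trip.
    """
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=5)
    connect_s = time.perf_counter() - t0
    try:
        sock.sendall(request)
        sock.recv(4096)  # wait for the first chunk of the reply
    finally:
        sock.close()
    total_s = time.perf_counter() - t0
    # total ~= 2 * RTT + processing; clamp at zero since this is a
    # rough estimate and jitter can push it slightly negative.
    processing_s = max(total_s - 2 * connect_s, 0.0)
    return connect_s * 1000, total_s * 1000, processing_s * 1000
```

Real tools refine this with repeated samples and percentiles, but the principle is the same: measure the network leg and the total separately, and the processing time falls out of the difference.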

Know the protocols of your applications: how many round trips it takes to do a thing for a user, and the network latency between components.

As we are on Stack Exchange, Stack Overflow's monitoring stack makes an interesting case study. Collect all the metrics, including web browser timing, do some light profiling, and patterns emerge from the data.

John Mahowald

1: Pretty much, yes. There is a small overhead (packets do not magically get sent back), but it should be minimal and below measurement granularity (e.g. if you measure in 0.1 ms increments, it should be below that, and it generally is).

2: They are the same only if no processing happens on the other side that takes time. What if the other side does, e.g., a password hash check and delays the response randomly (standard practice for this functionality, so you cannot deduce anything from the processing time)? Or has to resize a picture, and that takes half a second?

3: They are not calculated at all - they are measured, generally by either hitting a "do as little as possible" endpoint or using ping.
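A "do as little as possible" endpoint can be as small as an echo handler: it sends whatever it received straight back, so the time the client measures is almost pure round trip. A minimal sketch (the single-connection design and names are mine, not from any particular tool):

```python
import socket

def echo_once(srv: socket.socket) -> None:
    """Serve one connection on an already-listening socket, echoing
    the first chunk straight back with no processing in between."""
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        if data:
            conn.sendall(data)

# Usage sketch:
# srv = socket.socket()
# srv.bind(("127.0.0.1", 9000))  # placeholder address and port
# srv.listen(1)
# echo_once(srv)
```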

4: I am not going to RTFM for you, sorry. This is well documented in the ping documentation.

TomTom