20

I have the option of hosting our database/web server at a managed hosting company on the East Coast (US) or one on the West Coast (US). Our company is based out of New York City, and both hosting providers give our box a dedicated T1 line.

How much of a performance hit (assuming all other factors are equal) would I be taking in terms of network latency if I went with the one on the west coast as opposed to the one on the east coast? I'm not too sure how geography affects internet speeds when the numbers and distances get really large (T1s and above, and thousands of miles).

Thanks!

neezer
  • Oh, how I envy your position. We've got a client in the Philippines who is running on ISDN as their only link for all their network traffic. You wanna talk latency, try 700ms between them and us (in Australia) when there's NO traffic on the line :( – Mark Henderson Sep 02 '09 at 21:37

6 Answers

11

All other things equal, you will have an additional 44 milliseconds of latency just because of the speed of light. That's give or take 1/20 of a second for each packet round trip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.

I've ignored the extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed a distance of 4400 km and a speed of light in fiber of 200,000 km/s.
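If you want to sanity-check that figure yourself, here's a minimal sketch using the same assumptions (roughly 4400 km of fiber, light at about 200,000 km/s in glass); real paths add router, queuing, and serialization delay on top of this:

    # Back-of-the-envelope propagation delay, using the assumptions above:
    # ~4400 km of fiber between the coasts, light at ~200,000 km/s in glass
    # (about 2/3 of c). Ignores router, queuing, and serialization delay.
    DISTANCE_KM = 4400
    SPEED_IN_FIBER_KM_S = 200_000

    one_way_ms = DISTANCE_KM / SPEED_IN_FIBER_KM_S * 1000   # ~22 ms
    round_trip_ms = 2 * one_way_ms                          # ~44 ms

    print(f"one-way:    {one_way_ms:.0f} ms")
    print(f"round trip: {round_trip_ms:.0f} ms")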

kubanczyk
  • 1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provider is good. – Ryaner Sep 02 '09 at 22:08
  • @Ryaner, What do you mean by ~"*Fibre repeaters have zero latency*"? How is this possible? – Pacerier Feb 11 '17 at 10:19
  • http://www.lightwaveonline.com/articles/print/volume-29/issue-6/feature/network-latency-how-low-can-you-go.html covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers. – Ryaner Feb 13 '17 at 11:24
10

There is a distance delay, and all other things being equal (routing efficiency, processing overhead, congestion, etc.), a site on the west coast accessed by a host on the east coast is going to take longer to reach than if that site were on the east coast, but we're talking milliseconds here.

joeqwerty
6

We had a client that we spent a fair bit of time going around and around with on exactly this issue. They were originally hosted in New York, and their staff is mostly located in the Boston area. They were moving their servers to our facility in Denver, about two-thirds of the way across the country.

Once they moved, they started bringing up performance problems from their Comcast links in home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would potentially be getting better performance.

We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, unhappy with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At this point their performance concerns seem to have gone away.

That's just to give you one concrete case where this sort of issue came up, and the numbers involved.

Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...

From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.

Sean

Sean Reifschneider
3

Well, packets travel down the wire at close enough to the speed of light that raw transmission time is negligible compared to other factors. What matters is the efficiency of routing and how fast routing devices can do the routing. That unfortunately can't be determined purely based on geographical distance. There is a strong correlation between distance and latency, but there is no hard and fast rule that I am aware of.

EBGreen
3

The number of hops between point A and point B will introduce latency. Count the number of hops since this is your best indicator.

A few words of caution. Methods for evaluating the network path are not always consistent with how the actual packet will flow. ICMP may be routed differently and given a different QoS. Also, traceroute typically looks in one direction only, i.e. source to destination. Here are some handy tricks.

For traceroute, try using -I, -U or -T to see how the path varies with the probe type (ICMP, UDP or TCP). Also look at -t 16 or -t 8 to change the TOS bits. See the traceroute man page.

Ping is actually pretty helpful too: ping -R will show you the path the reply takes to return! If it differs from the path going out, see where it diverges. See the ping man page.
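If you'd rather script the comparison than eyeball it, something along these lines works as a rough sketch; it assumes a Linux-style traceroute is installed (example.com is just a placeholder target), and the -I and -T probe modes usually need root:

    # Compare the reported path under different traceroute probe types.
    # Assumes a Linux-style traceroute binary; -I (ICMP) and -T (TCP)
    # typically require root privileges. The target is a placeholder.
    import subprocess

    TARGET = "example.com"

    for label, flags in (("UDP (default)", []), ("ICMP", ["-I"]), ("TCP", ["-T"])):
        print(f"--- traceroute via {label} ---")
        # -n skips reverse DNS so the hop lists are easier to compare
        result = subprocess.run(["traceroute", "-n", *flags, TARGET],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)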

Noah Campbell
2

I think geography will have a lot to do with packet transmission time, since the further you go, the more hops you will most likely add, which affects overall latency. If your customers are going to be based mostly on the west coast, then I'd go for the west-coast hosting... Same thing for the east coast. If your customers will be coming from all over the US, or the world... then you'll just have to make the hard decision as to which side gets the lower latency overall.

In our case, we're on our own network (one big intranet), and we're able to let our routers make decisions based on OSPF throughout the state :) Unfortunately, anything off our network relies primarily on our ISP's layout.

l0c0b0x
  • A great tool you can use is MTR to not just find out the latency, but packet loss, route info, jitter, etc. I made a post about the information it gives here: http://serverfault.com/questions/21048/what-tools-should-every-sysadmin-use-that-no-ones-heard-of/21053#21053 – l0c0b0x Sep 02 '09 at 21:47