We had a client we went around and around with for a fair bit of time on exactly this issue. They were originally hosted in New York, and their staff is mostly in the Boston area. They were moving their servers to our facility in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems on their Comcast links in home offices. They used to see <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would potentially get better performance.
We went back and forth a few times. After around 6 months we switched to a different primary upstream ISP for reasons unrelated to this client (better pricing, more bandwidth, and unhappiness with the number of maintenance windows on the old provider), and with the new provider this client was seeing around 45ms average latency. At that point their performance concerns seem to have gone away.
Just to give you one real-world case where this sort of issue came up, and the numbers involved.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
In my experience, 250ms latency starts to feel noticeably slow, and 10ms or better feels like a blazing connection. It really depends on what you're doing.
Sean