
I have a web server on a local box (in India) which connects to Amazon RDS (in a US region). Responses are very slow because the web server in India is fetching data from RDS in the US.

But when I travel to the US and access the same RDS from the same box, it works as if the data were fetched from a local DB.

Both ISPs (India and US) provide 10 Mbps. My understanding was that distance should not matter, since electrical signals travel almost instantaneously. Can the switch between a country-wide network and an international network take time?

Basically, I am trying to understand why speed differs based on the web server's location. Will moving RDS to an India region improve speed?

user3198603

3 Answers


Electrons do not "travel instantaneously". You need to read up on latency, which is as important as bandwidth. Latency effectively reduces bandwidth, especially on TCP networks.

The packets will likely travel via a variety of media to reach the destination: electrons over copper Ethernet, light over fiber, and so on. There are many hops along the way, through many routers, and each router adds time to the path.

Basically, packets of data will take 100-200ms to get from India to the US, and the same again coming back. It takes a few round trips to set up a TCP connection, which is why it's faster when you're near the database.
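You can see the handshake cost for yourself by timing a bare TCP connect from the web server box. A minimal sketch (the host and port are placeholders; substitute your own RDS endpoint):

```python
import socket
import time

def tcp_connect_ms(host, port, timeout=10.0):
    """Time the TCP three-way handshake to host:port, in milliseconds."""
    start = time.monotonic()
    # create_connection returns only after the handshake completes
    sock = socket.create_connection((host, port), timeout=timeout)
    sock.close()
    return (time.monotonic() - start) * 1000.0

# Example (hypothetical endpoint):
# print(tcp_connect_ms("mydb.xxxx.us-east-1.rds.amazonaws.com", 3306))
```

Run it from India and from the US and compare the numbers; the difference is roughly the one-round-trip cost your application pays repeatedly.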

If you move RDS to India you'll very likely get better application performance through reduced latency. How much benefit you get depends somewhat on your application.

Tim
  • When you say `It takes a few round trips to set up a TCP connection. This is why it's faster when you're near the database.` is there a TCP connection between the web server and the DB too, like between the browser and the web server? – user3198603 Apr 09 '17 at 13:30
  • Yes, the web server connects to the DB server in a very similar way to how the client connects to the web server. It takes even more round trips to set up an HTTPS connection than a plain TCP one. TCP info here: http://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml – Tim Apr 09 '17 at 18:59

Your understanding of the speed of light is a bit off, as is your understanding of the complexity of getting a packet from one side of the globe to the other.

So, the speed of light in glass does play a role in packet latency, but link contention and the latency added by the routers, switches, and other devices your packet travels through matter even more. There is absolutely nothing you can do to improve this type of performance other than bringing the two servers closer together.
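A back-of-the-envelope calculation shows the physics floor alone is substantial. Light in fiber propagates at roughly 2/3 of its vacuum speed; the distance below is a rough great-circle figure (assumed for illustration, e.g. Mumbai to the US East Coast):

```python
# Theoretical minimum round-trip time India <-> US over fiber.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3         # light in glass is ~2/3 as fast
DISTANCE_KM = 13_500         # rough great-circle distance (assumption)

one_way_ms = DISTANCE_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
rtt_ms = 2 * one_way_ms
print(f"Theoretical minimum RTT: {rtt_ms:.0f} ms")  # about 135 ms
```

Real cable paths are longer than the great circle and routers add queueing delay, which is why observed RTTs land in the 150-300 ms range even before any application round trips.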

So yes, perhaps hosting the database in India may improve performance, but maybe not. You should test this to find out.

In general, though, separating application servers from their database the way you are doing is a horrible practice for many reasons. You should co-locate them if at all possible.

EEAA

Moving the database to the same region as your web server will certainly improve performance. Signals travel along the wire very rapidly (though not at light speed), but the packets you send must be processed by several routers and firewalls as they pass through countries and ISPs. This is compounded by the fact that the trip must be made back and forth several times before any information is transmitted.

If you cannot move the RDS database to the same region, there is the option of setting up a caching server in the India region. If the AWS ElastiCache service is available in the India region, you can spin up a cluster (if it's not available, you could run your own server with memcached to do the same thing). It will make the long trip to the US and back, then store a copy of the data in memory where it can be quickly accessed by your web server. This only improves common read requests; all write requests would still need to go to the US.
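The read path this describes is the cache-aside pattern. A minimal sketch, with a plain dict standing in for memcached/ElastiCache and `slow_db_query` as a hypothetical placeholder for the cross-ocean query to RDS:

```python
# Cache-aside: check the local cache first, fall back to the remote DB.
cache = {}

def slow_db_query(key):
    # Stand-in for a query that pays the India -> US round trip.
    return f"row-for-{key}"

def cached_read(key):
    if key in cache:               # hit: served locally, no ocean crossing
        return cache[key]
    value = slow_db_query(key)     # miss: pay the full latency once
    cache[key] = value
    return value

def write_through(key, value):
    # Writes still cross the ocean to RDS; then update (or invalidate)
    # the local cache so subsequent reads stay consistent.
    # ... send the write to RDS here ...
    cache[key] = value
```

The first read of a key is as slow as before; every repeat read is local, which is why this helps read-heavy workloads most.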

Another option might be to spin up another RDS instance in the India region and institute a multi-master solution, where the two databases synchronize with each other. It's a little dicey with them being so far apart, but it ought to be fine as long as there is a low volume of requests. However, a multi-master database setup is not a trivial thing to set up and manage.

If you have the option of moving the web server to the US (depending on the type of content you're serving), you can set up content distribution with CloudFront in the India region, and customers in India will be able to access your content with better latency (though customer requests doing writes will still have to make the long trip). Also, when moving servers to the US, be aware of the Patriot Act and what it may mean for your ability to ensure confidentiality of the data you're storing (though I'm guessing this isn't an issue if you have your database in the US already).

  • When you say `If you cannot move the RDS database to the same region, there is the option of setting up a caching server in the India region.` does it mean the entire Oracle DB will be stored in cache and data for queries will be served from memory? If yes, how will data in memory be synchronized when there is an update or insert? – user3198603 Apr 09 '17 at 13:26
  • The update and insert queries will have to go all the way back to the database and suffer a performance penalty. memcached / ElastiCache / Redis will cache the most frequently used portions read-only, so the select statements will benefit the most. – TopherIsSwell Apr 09 '17 at 13:33