4

This is kind of a two-part question. So I have a DigitalOcean Droplet in Toronto with a LAMP stack on it (with a website, of course). I want to create a snapshot of that droplet and deploy clones in, say, San Francisco and Amsterdam.

  1. How do I accomplish routing to the server with the least latency based on location?

  2. How do I keep these cloned sites in sync in real time? For example, if I edit the website, the change is reflected across all the servers.

Thanks

  • I think that serverfault.com is not using the answers to this question :) load times from the Paris airport are quite slow. Maybe it’s my connection. – Sam Creamer Feb 08 '19 at 06:57

2 Answers

4

You can use AWS Route 53 latency-based routing. I'm fairly confident you can use R53 with non-AWS endpoints.
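For illustration, here's a minimal sketch of what the latency records could look like, using Python and boto3; the hosted zone ID, the IP addresses, and the region-to-city mapping are placeholders you'd replace with your own:

    import boto3

    route53 = boto3.client("route53")

    # Placeholder values: hosted zone ID, droplet IPs, and the AWS region
    # each droplet is mapped to for the latency comparison.
    ZONE_ID = "Z0000000EXAMPLE"
    ENDPOINTS = [
        ("toronto", "us-east-1", "203.0.113.10"),
        ("san-francisco", "us-west-1", "203.0.113.20"),
        ("amsterdam", "eu-central-1", "203.0.113.30"),
    ]

    changes = []
    for set_id, region, ip in ENDPOINTS:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": set_id,   # distinguishes the three latency records
                "Region": region,          # AWS region used for the latency comparison
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        })

    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": changes},
    )

Route 53 then answers each DNS query with whichever of the three records is closest (latency-wise) to the resolver making the request.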

Alternatively, CloudFlare has a traffic manager that can do "geo steering". It may still be in beta, and I don't know whether it's on the free or paid plan.

Update: I just noticed your second question. I'm going to assume your aim is primarily page load time, with a secondary concern of service availability. I'm further going to assume the website is WordPress, because what I'm suggesting below is more difficult to do with stock software than with custom-written software.

If you want the two sites serving traffic at the same time, in sync, you'll have to consider some kind of multi-master database replication. This isn't simple, but there are techniques to do it; DigitalOcean has a tutorial here. Rsync or BitTorrent Sync will deal with file replication.
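For the file side only (this doesn't cover the database), a one-way rsync push from the primary to the replicas is the simplest starting point. A minimal sketch in Python, with hypothetical hostnames and paths:

    import subprocess

    # Hypothetical replica hosts and web root -- adjust to your droplets.
    REPLICAS = ["deploy@sfo.example.com", "deploy@ams.example.com"]
    DOCROOT = "/var/www/html/"

    def push_files():
        """One-way push of the web root from the primary to each replica.

        rsync -az only transfers changed files and preserves permissions;
        --delete stops replicas accumulating files removed on the primary.
        """
        for host in REPLICAS:
            subprocess.run(
                ["rsync", "-az", "--delete", DOCROOT, f"{host}:{DOCROOT}"],
                check=True,
            )

    if __name__ == "__main__":
        push_files()

You'd run something like this from cron or a deploy hook on the primary; BitTorrent Sync is the multi-directional equivalent.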

If your sole goal is fast response times then you might be just as well off using a single server with a CDN to ensure your static resources are served locally - CloudFlare is good for that. The extra latency on the initial page request probably isn't significant (around 100ms extra); the other resources will be served from the nearest CDN node.
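If you want to check whether that extra ~100ms actually matters from a given location, a rough timing of a full page fetch is enough. A quick sketch, with a placeholder URL:

    import time
    import urllib.request

    # Placeholder URL -- substitute your own page.
    URL = "https://www.example.com/"

    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Full page fetch took {elapsed_ms:.0f} ms")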

If your sole goal is redundancy in case of failure, you might still consider a master/read-replica setup for the database. Route all traffic to one server, with the database and files replicating live to the second site. If the main site goes down, you fail over to the second site. If that happens, you also need to work out how to get things back in sync when the main site comes back online. In this case multi-master might still be easiest, since it keeps both sides in sync.
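The failover decision itself usually starts with a simple health check against the primary. A rough sketch of just the monitoring half (the URL and thresholds are placeholders, and the actual DNS or floating-IP switch is deliberately left out):

    import time
    import urllib.error
    import urllib.request

    PRIMARY = "https://toronto.example.com/health"   # hypothetical health endpoint
    CHECK_INTERVAL = 30                              # seconds between checks
    FAILURES_BEFORE_FAILOVER = 3                     # consecutive failures tolerated

    failures = 0
    while True:
        try:
            with urllib.request.urlopen(PRIMARY, timeout=5) as resp:
                ok = resp.status == 200
        except (urllib.error.URLError, OSError):
            ok = False

        failures = 0 if ok else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            # Here you would repoint DNS (or a floating IP) at the secondary site.
            print("Primary looks down -- trigger failover")
            failures = 0

        time.sleep(CHECK_INTERVAL)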

Unfortunately what you're trying to do isn't entirely trivial, and can be moderately to exceptionally complex depending on your use cases. We really need to understand your goals to offer better solutions.

Tim
  • Route 53 LBR works with external services, as long as you understand that it is designed to identify the nearest AWS region to the browser and return the record for the service endpoint you've designated "for" that region (though the service endpoint does not in fact need to be "in" that region). OP would need to transform their topology into a mapping like us-east-1 > Toronto, us-west-1/us-west-2 > San Francisco, eu-central-1/eu-west-2 > Amsterdam, which would probably achieve a result that would, overall, be solid though imprecise. – Michael - sqlbot Oct 10 '16 at 03:27
2

Tim gave good solutions for your first question.

For the second one, unfortunately, no - this falls under the "cache invalidation" class of problems. There are several solutions:

  • Use one server plus a CDN (Cloudflare/CloudFront) to cache content. You can set a TTL to handle cache invalidation (i.e. an "eventual consistency" model), or,
  • Use one server plus a CDN, and send purge requests whenever you update content (see the sketch after this list), or,
  • Keep three servers, in which case you will need to build your own API(s) to do the purging.
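As an example of the second option, purging CloudFront after a content update is a single API call. A minimal sketch with Python and boto3 (the distribution ID and paths are placeholders; CloudFlare has an equivalent purge endpoint in its own API):

    import time
    import boto3

    # Placeholder distribution ID and paths -- adjust to your setup.
    DISTRIBUTION_ID = "E2EXAMPLE12345"
    PATHS = ["/index.html", "/css/*"]

    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(PATHS), "Items": PATHS},
            # CallerReference must be unique per request; a timestamp works.
            "CallerReference": str(time.time()),
        },
    )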
Gea-Suan Lin