6

I am setting up infrastructure for a startup that will not have much traffic initially, but should be able to scale when needed.

We are favouring a setup with an LB that distributes traffic to the frontend nodes in a dedicated private subnet (across 3 availability zones), which in turn make requests to the backend nodes in their own dedicated private subnet, which in turn make requests to MongoDB, managed via Atlas and VPC peering.

In order to provision, each node requires internet access. The backend nodes also make requests to third-party services and therefore require internet access while they are running as well.

I see three options:

  • set up a NAT gateway for each private subnet in each availability zone. Depending on the region this comes to around $30 per subnet per availability zone. With 3 availability zones and 2 subnets that totals around $180 a month, which is actually more than we plan to spend on the EC2 instances while there is not much traffic or load on the system. We could probably cut that down by using just 1 NAT gateway in each availability zone for all the private subnets, but that is still around $90.

  • set up EC2 instances as NAT gateways, which will probably be a little cheaper, but requires setup and maintenance.

  • just use one private subnet, assign public IPs to each node and use the internet gateway via route table entries. I don't think using dedicated private subnets will make much sense as the nodes should be able to connect with each other via the gateway anyway.

The last option will most likely be the cheapest, as one Elastic IP is already included with each EC2 instance and dedicated gateways are not needed. However, I was wondering whether there is a significant downside or risk involved in doing so? We plan to return to the idea of dedicated subnets when there is a need to (like significant traffic), but we really would like to keep costs as low as possible in the beginning.
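For context, here is roughly what the last option would look like in Terraform. This is only a sketch; the resource names, CIDR blocks and the single subnet are placeholders, not our actual configuration:

```hcl
# Hypothetical sketch: one subnet whose instances get public IPs and route
# straight to the internet gateway (no NAT device at all).
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "app" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # every node gets a public IP on launch
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "app" {
  subnet_id      = aws_subnet.app.id
  route_table_id = aws_route_table.public.id
}
```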

st-h
  • 257
  • 2
  • 8

2 Answers

15

You seem to be laboring under some misunderstandings about network fundamentals in VPC.

set up a NAT gateway for each private subnet in each availability zone.

For all practical purposes, this is never something you would actually need to do.

The maximum number of NAT Gateways you would ever need in a single VPC is 1 per AZ.

NAT Gateways are never placed on (any of) the subnet(s) they serve. NAT Gateways are placed on a public subnet, which has a default route pointing to the Internet Gateway. They then provide NAT services to instances on other subnets, whose route tables point their default route at the NAT Gateway.

So the number of private subnets in an AZ has no relationship to the number of NAT Gateways. Unless you need Internet bandwidth in excess of 45 Gbit/s per AZ, you don't need multiple NAT Gateways.
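As a rough illustration, here's a minimal Terraform sketch of that layout. The VPC and subnet resources are assumed to exist elsewhere, and all names here are made up:

```hcl
# One NAT Gateway, living in a *public* subnet (one that routes to the IGW).
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  subnet_id     = aws_subnet.public_a.id # public subnet in one AZ
  allocation_id = aws_eip.nat.id
}

# The *private* subnets simply point their default route at it. Several
# private subnets can share a single route table, and therefore a single
# NAT Gateway.
resource "aws_route_table" "private_a" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.this.id
  }
}

resource "aws_route_table_association" "frontend_a" {
  subnet_id      = aws_subnet.frontend_a.id
  route_table_id = aws_route_table.private_a.id
}

resource "aws_route_table_association" "backend_a" {
  subnet_id      = aws_subnet.backend_a.id
  route_table_id = aws_route_table.private_a.id
}
```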

Next, you technically need only one NAT Gateway per VPC. A NAT Gateway is a logical entity, not a physical one, so there is no known mechanism by which one can "fail" (except at initial creation, when there's a possibility of it failing to be defined). The things that suggest against sharing a single NAT Gateway across availability zones are these:

  • You'll pay for cross-zone traffic using the gateway. As long as that is less than the cost of an additional gateway, it could still make sense.
  • You'll see a slight uptick in latency, typically single digit milliseconds, for cross-zone traffic using the NAT Gateway to access the Internet. This is a tradeoff, but may be insignificant.
  • The complete outage, failure, loss, or destruction of the AZ hosting the gateway will result in loss of use of the gateway across all AZs, but this has apparently never happened to date.

Next, using EC2 instances as NAT devices requires almost no setup. The stock AMI for NAT Instances is zero-config at the instance level. You can also build your own. EC2 Instance Recovery can repair a NAT Instance when the underlying hardware actually fails or the hypervisor becomes unresponsive (rare but possible).
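If you want to go that route, the essential configuration is small. A hedged Terraform sketch, assuming the classic amzn-ami-vpc-nat AMI and pre-existing public subnet and private route table resources:

```hcl
# Look up the stock Amazon NAT AMI (assumption: the amzn-ami-vpc-nat image family).
data "aws_ami" "nat" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-vpc-nat-*"]
  }
}

resource "aws_instance" "nat" {
  ami                         = data.aws_ami.nat.id
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public_a.id
  associate_public_ip_address = true

  # Required so the instance may forward traffic not addressed to itself.
  source_dest_check = false
}

# Private subnets default-route to the NAT Instance's network interface.
resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private_a.id
  destination_cidr_block = "0.0.0.0/0"
  network_interface_id   = aws_instance.nat.primary_network_interface_id
}
```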

I don't think using dedicated private subnets will make much sense as the nodes should be able to connect with each other via the gateway anyway.

That isn't really relevant either way. Dedicated subnets or a lack of them has no real impact on the way instances communicate with each other -- only the way they believe they are communicating. The "default router" on each subnet in a VPC is an imaginary device that exists for compatibility with the way IP over Ethernet works. When two instances in a VPC are allowed by Security Groups and Network ACLs to communicate, the way their actual traffic gets from one instance to another is identical, regardless of whether the two instances are on the same subnet or not.

Cross-subnet, an instance goes through the motions of ARPing for the default gateway and sending traffic to it; meanwhile the hypervisor plays along but effectively ignores all of that and sends the traffic directly to the other instance's hypervisor. Within a subnet, the instance ARPs for its peer, the hypervisor spoofs the response (the ARP "who has" never appears at the target instance, yet the source instance sees the response the target never generated), and the node-to-node traffic follows exactly the same path as before.

We all did fine for years using EC2 instances as NAT Instances because that was the only option -- NAT Gateway is a relatively new service. If you are trying to save costs, go with that. Or go with that in all AZs except for one, and use one NAT Gateway in that one AZ.

Add VPC endpoints for services that support them, like S3 and DynamoDB, since these endpoints allow you to access those services without a NAT device.
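Gateway endpoints for S3 and DynamoDB have no hourly charge, and adding one is a single resource. For example (the region and route table name here are assumptions):

```hcl
# Gateway endpoint: S3 traffic stays inside AWS and never touches the NAT device.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.eu-central-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private_a.id]
}
```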

Michael - sqlbot
  • 21,988
  • 1
  • 57
  • 81
  • Thank you very much for the detailed answer. Really enjoyed reading about how things work behind the scenes. My assumptions came from using the Terraform VPC module, which by default creates a NAT gateway for each private subnet, which I assumed to be a sensible default... But, indeed, when using the internet gateway to provide internet access to nodes, it is possible to reach them from outside (even if they are in private subnets). So either a single NAT gateway or an EC2 instance replacing the NAT gateway are the only sensible solutions here. – st-h Nov 05 '18 at 11:25
  • 2
    @SteveHummingbird, thanks. Note also, if a subnet uses a route table that has its default route (or, technically, any route) pointing to the VPC's Internet Gateway... then that subnet is, by definition, **not** a private subnet. The lack of any routes pointing to the IGW is exactly what makes a subnet "private" -- it's not a separate attribute. – Michael - sqlbot Nov 05 '18 at 12:28
  • @Michael-sqlbot, thanks for a very detailed post and the info about NAT EC2 instances. Though I have some doubts regarding your statement that single NAT gateway deployments are safe and that a full AZ failure has never happened to date. I have read about cases where an entire AZ lost power for extended periods of time, such as these: https://www.datacenterdynamics.com/en/news/power-event-at-aws-data-center-disrupts-us-east-1/ , https://www.datacenterknowledge.com/archives/2012/06/30/amazon-data-center-loses-power-during-storm . Am I missing something? Thanks! – SoftwareFactor Jul 13 '20 at 14:18
  • 1
    @SoftwareFactor technically that wasn't the loss of an availability zone, it was the loss of a portion of one AZ. (In some regions, like us-east-1, there are availability zones that span multiple buildings, and that's what this was.) But, that was also before NAT Gateways existed and it isn't clear whether that outage would have caused such a failure. Still, good point. However... I would not use the word "safe" nor recommend using only a single NAT Gateway in any important environment. It would typically be perfectly fine in dev/qa/staging. They are badly overpriced. – Michael - sqlbot Jul 15 '20 at 12:25
5

You can route outbound traffic from several private subnets to the same natgw.

Sometimes I create a separate public "management" subnet where the natgw (or similar resources) is located, and give every private subnet that should be able to access the internet a route to that natgw.

That layout makes it more obvious that the subnet serves a special purpose and may be used by several other, separate subnets. Typically I assign such a subnet a tiny CIDR.
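Roughly like this in Terraform (only a sketch; names and CIDR blocks are placeholders, and the frontend/backend subnets are assumed to be defined elsewhere):

```hcl
# Tiny dedicated "management" subnet; /28 is the smallest size AWS allows.
# It needs its own route table with a default route to the internet gateway
# (omitted here) so the NAT gateway can reach the internet.
resource "aws_subnet" "management" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.255.240/28"
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "shared" {
  subnet_id     = aws_subnet.management.id
  allocation_id = aws_eip.nat.id
}

# One route table, associated with every private subnet that needs internet access.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.shared.id
  }
}

resource "aws_route_table_association" "frontend" {
  subnet_id      = aws_subnet.frontend.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "backend" {
  subnet_id      = aws_subnet.backend.id
  route_table_id = aws_route_table.private.id
}
```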

hargut
  • 3,848
  • 6
  • 10