Centralized egress to internet - Building a Scalable and Secure Multi-VPC AWS Network Infrastructure

As you deploy applications in your multi-account environment, many apps will require outbound-only internet access (for example, to download libraries, patches, or OS updates). This can be achieved for both IPv4 and IPv6 traffic. For IPv4, this can be achieved through network address translation (NAT) in the form of a NAT gateway (recommended) or, alternatively, a self-managed NAT instance running on an Amazon EC2 instance, as the path for all egress internet access. Internal applications reside in private subnets, while NAT gateways or Amazon EC2 NAT instances reside in a public subnet. AWS recommends that you use NAT gateways because they provide better availability and bandwidth and require less administration effort. For more information, refer to Compare NAT gateways and NAT instances. For IPv6, egress traffic can either leave each VPC through an egress-only internet gateway in a decentralized manner, or be sent to a centralized VPC that uses NAT instances or proxy instances. The IPv6 patterns are discussed below in the Centralized Egress for IPv6 section of this document.

Using the NAT gateway for centralized IPv4 egress

NAT gateway is a managed network address translation service. Deploying a NAT gateway in every spoke VPC can become cost prohibitive because you pay an hourly charge for every NAT gateway you deploy (refer to Amazon VPC pricing). Centralizing NAT gateways can be a viable option to reduce costs. To centralize, you create a separate egress VPC in the network services account, deploy NAT gateways in the egress VPC, and route all egress traffic from the spoke VPCs to the NAT gateways residing in the egress VPC using Transit Gateway or CloudWAN, as shown in the following figure.

Note

When you centralize NAT gateways using Transit Gateway, you pay an additional Transit Gateway data processing charge compared to the decentralized approach of running a NAT gateway in every VPC. In some edge cases, when you send large amounts of data through a NAT gateway from a VPC, keeping the NAT gateway local to that VPC to avoid the Transit Gateway data processing charge can be the more cost-effective option.

Decentralized high availability NAT gateway architecture

Centralized NAT gateway using Transit Gateway (overview)

Centralized NAT gateway using Transit Gateway (route table design)

In this setup, the spoke VPC attachments are associated with Route Table 1 (RT1) and propagated to Route Table 2 (RT2). A blackhole route for 10.0.0.0/8 in RT1 prevents the spoke VPCs from communicating with each other. If you want to allow inter-VPC communication, you can remove the 10.0.0.0/8 -> Blackhole route entry from RT1, which allows the spoke VPCs to communicate through the transit gateway. You can also propagate the spoke VPC attachments to RT1 (or, alternatively, use a single route table and associate and propagate everything to it), enabling direct traffic flow between the VPCs through Transit Gateway.
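The RT1 behavior described above can be sketched as a small Python model (this is an illustrative simulation, not an AWS API; the attachment name and the use of 10.0.0.0/8 as the spoke supernet are assumptions): route evaluation uses longest-prefix match, so the more specific blackhole route wins over the default route for spoke-to-spoke traffic.

```python
# Minimal model of Transit Gateway route table RT1 (illustrative only).
from ipaddress import ip_address, ip_network

# RT1: associated with the spoke VPC attachments.
# (CIDR, next-hop attachment); a next hop of None models a blackhole route.
RT1 = [
    ("0.0.0.0/0", "egress-vpc-attachment"),  # static default route to the egress VPC
    ("10.0.0.0/8", None),                    # blackhole: spokes cannot reach each other
]

def route(table, dest):
    """Return the next-hop attachment for dest; None means blackholed or unmatched."""
    matches = [(ip_network(cidr), hop) for cidr, hop in table
               if ip_address(dest) in ip_network(cidr)]
    if not matches:
        return None
    # Longest-prefix match: the most specific route wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Spoke-to-spoke traffic matches the more specific 10.0.0.0/8 blackhole.
assert route(RT1, "10.2.0.15") is None
# Internet-bound traffic matches only the default route to the egress VPC.
assert route(RT1, "198.51.100.7") == "egress-vpc-attachment"
```

Removing the ("10.0.0.0/8", None) entry models the configuration change described above: spoke-bound traffic would then follow whatever routes are propagated to RT1 instead of being dropped.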

You add a static route in RT1 pointing all traffic to the egress VPC. Because of this static route, Transit Gateway sends all internet-bound traffic through its elastic network interfaces (ENIs) in the egress VPC. Once in the egress VPC, traffic follows the routes defined in the route table of the subnet where these Transit Gateway ENIs reside. You add a route in those subnet route tables pointing all traffic to the NAT gateway in the same Availability Zone to minimize cross-Availability Zone (AZ) traffic. The NAT gateway subnet route table has the internet gateway (IGW) as the next hop. For return traffic to flow back, you must add a static route entry in the NAT gateway subnet route table pointing all spoke VPC-bound traffic to Transit Gateway as the next hop.
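The two subnet route tables in the egress VPC can be sketched the same way (a hedged model with assumed subnet and resource names, not AWS APIs): the Transit Gateway ENI subnet forwards everything to the same-AZ NAT gateway, and the NAT gateway subnet splits traffic between the internet gateway (forward path) and the Transit Gateway (return path to the spokes).

```python
# Illustrative model of the egress VPC subnet route tables (names are assumptions).
from ipaddress import ip_address, ip_network

SUBNET_ROUTES = {
    # Subnet holding the Transit Gateway ENIs (AZ a): everything to the local NAT gateway.
    "tgw-eni-subnet-az-a": [("0.0.0.0/0", "nat-gateway-az-a")],
    # Public subnet holding the NAT gateway (AZ a):
    "nat-subnet-az-a": [
        ("0.0.0.0/0", "internet-gateway"),  # forward path to the internet
        ("10.0.0.0/8", "transit-gateway"),  # static return route for spoke-bound traffic
    ],
}

def next_hop(subnet, dest):
    """Longest-prefix match within one subnet route table."""
    matches = [(ip_network(cidr), hop) for cidr, hop in SUBNET_ROUTES[subnet]
               if ip_address(dest) in ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Forward path: internet-bound traffic arriving on the TGW ENI goes to the local NAT gateway.
assert next_hop("tgw-eni-subnet-az-a", "198.51.100.7") == "nat-gateway-az-a"
# From the NAT gateway's subnet, internet traffic exits through the internet gateway...
assert next_hop("nat-subnet-az-a", "198.51.100.7") == "internet-gateway"
# ...while return traffic for a spoke VPC goes back to the Transit Gateway.
assert next_hop("nat-subnet-az-a", "10.1.0.5") == "transit-gateway"
```

Without the 10.0.0.0/8 -> transit-gateway entry, return traffic would match the default route toward the internet gateway and never reach the spokes, which is why that static entry is required.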

High availability

For high availability, you should use more than one NAT gateway (one in each Availability Zone). If a NAT gateway is unavailable, traffic traversing it in that Availability Zone might be dropped. If an entire Availability Zone is unavailable, the Transit Gateway endpoint and the NAT gateway in that Availability Zone fail together, and all traffic flows through the Transit Gateway endpoint and NAT gateway in the other Availability Zone.
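The AZ behavior can be summarized in a short sketch (illustrative logic and names, not an AWS API): traffic prefers the path in its own Availability Zone, and only an AZ-wide failure shifts it to the Transit Gateway endpoint and NAT gateway in another zone.

```python
# Hedged model of AZ affinity and AZ-level failover (names are illustrative).
def egress_path(source_az, az_available):
    """az_available maps AZ name -> True if that AZ's TGW endpoint and NAT gateway are up."""
    if az_available.get(source_az):
        return f"nat-{source_az}"  # same-AZ path, avoiding cross-AZ traffic
    for az, up in az_available.items():  # AZ-wide failure: fail over to a healthy AZ
        if up:
            return f"nat-{az}"
    return None  # no healthy path: traffic is dropped

assert egress_path("az1", {"az1": True, "az2": True}) == "nat-az1"
assert egress_path("az1", {"az1": False, "az2": True}) == "nat-az2"
```

Note that this models AZ-level failure only; as described above, if just the NAT gateway in an otherwise healthy AZ becomes unavailable, traffic in that AZ might simply be dropped rather than rerouted.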

Security

You can rely on security groups on the source instances, blackhole routes in the Transit Gateway route tables, and the network ACL of the subnet in which the NAT gateway is located. For example, you can use network ACLs on the NAT gateway public subnets to allow or block specific source or destination IP addresses. Alternatively, you can combine the NAT gateway with AWS Network Firewall for centralized egress, as described in the next section, to meet this requirement.
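The network ACL option can be illustrated with a minimal model (not the AWS API; the rule numbers and blocked range are assumed examples): NACL rules are evaluated in ascending rule-number order, the first matching rule decides, and the implicit "*" rule denies anything unmatched.

```python
# Illustrative model of outbound network ACL evaluation on the NAT gateway's
# public subnet (rule numbers and CIDRs are assumptions for the example).
from ipaddress import ip_address, ip_network

# (rule number, destination CIDR, action) - lowest rule number is checked first.
NACL_RULES = [
    (90,  "198.51.100.0/24", "deny"),   # block one destination range
    (100, "0.0.0.0/0",       "allow"),  # allow all other internet-bound traffic
]

def evaluate(rules, dest):
    for _num, cidr, action in sorted(rules):  # ascending rule number
        if ip_address(dest) in ip_network(cidr):
            return action  # first match wins
    return "deny"  # implicit "*" rule: deny everything unmatched

assert evaluate(NACL_RULES, "198.51.100.7") == "deny"
assert evaluate(NACL_RULES, "203.0.113.9") == "allow"
```

Because rule 90 sorts before rule 100, the deny for 198.51.100.0/24 takes effect even though the broader allow rule would also match; ordering the rule numbers is what makes this kind of selective blocking work.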

Scalability

A single NAT gateway can support up to 55,000 simultaneous connections per assigned IP address to each unique destination. You can request a quota adjustment to allow up to eight assigned IP addresses, for up to 440,000 simultaneous connections to a single destination IP and port. A NAT gateway provides 5 Gbps of bandwidth and automatically scales up to 100 Gbps. Transit Gateway does not act as a load balancer and will not distribute your traffic evenly across the NAT gateways in multiple Availability Zones; traffic crossing the Transit Gateway stays within an Availability Zone where possible. If the Amazon EC2 instance initiating the traffic is in Availability Zone 1, traffic flows out of the Transit Gateway elastic network interface in Availability Zone 1 of the egress VPC and continues to the next hop based on the route table of the subnet in which that elastic network interface resides. For a complete list of rules, refer to NAT gateways in the Amazon Virtual Private Cloud documentation.

For more information, refer to the Creating a single internet exit point from multiple VPCs Using AWS Transit Gateway blog post.