Enable outbound internet access for pods
Applies to: IPv4 clusters with Linux nodes on Amazon EC2 instances or Fargate
If you deployed your cluster using the IPv6 family, then the information in this topic isn't applicable to your cluster, because IPv6 addresses are not network translated. For more information about using IPv6 with your cluster, see Assign IPv6 addresses to clusters, pods, and services.
By default, each Pod in your cluster is assigned a private IPv4 address from a classless inter-domain routing (CIDR) block that is associated with the VPC that the Pod is deployed in. Pods in the same VPC communicate with each other using these private IP addresses as endpoints. When a Pod communicates to any IPv4 address that isn't within a CIDR block that's associated with your VPC, the Amazon VPC CNI plugin (for both Linux and Windows) translates the Pod's IPv4 address to the primary private IPv4 address of the primary elastic network interface of the node that the Pod is running on, by default*.
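Conceptually, the default source network address translation (SNAT) decision can be sketched as follows. This is not the CNI plugin's implementation, and the VPC CIDR, addresses, and function names below are hypothetical examples for illustration only.

```shell
# Conceptual sketch only -- NOT the CNI plugin's actual implementation.
# The VPC CIDR and IP addresses below are hypothetical examples.

# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeeds (exit 0) if address $1 falls inside the CIDR block $2.
in_cidr() {
  _ip=$(ip_to_int "$1")
  _net=$(ip_to_int "${2%/*}")
  _bits=${2#*/}
  _mask=$(( (0xFFFFFFFF << (32 - _bits)) & 0xFFFFFFFF ))
  [ $(( _ip & _mask )) -eq $(( _net & _mask )) ]
}

# Default behavior: destinations inside a VPC-associated CIDR keep the Pod's
# IP as the source; all other destinations are SNATed to the node's primary
# private IPv4 address.
snat_decision() {
  if in_cidr "$1" "$2"; then
    echo "no SNAT: source stays Pod IP"
  else
    echo "SNAT: source becomes $3"
  fi
}

snat_decision 10.0.42.7 10.0.0.0/16 10.0.1.10       # destination inside the VPC
snat_decision 93.184.216.34 10.0.0.0/16 10.0.1.10   # internet destination
```

Setting AWS_VPC_K8S_CNI_EXTERNALSNAT=true, described later in this topic, effectively disables this translation so that an external device (such as a NAT gateway) performs it instead.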
Note
For Windows nodes, there are additional details to consider. By default, the VPC CNI plugin for Windows excludes traffic to destinations within the same VPC from SNAT; only traffic destined for addresses outside the VPC is translated to the primary private IP address of the node's elastic network interface.
Due to this behavior:

- Your Pods can communicate with internet resources only if the node that they're running on has a public or elastic IP address assigned to it and is in a public subnet. A public subnet's associated route table has a route to an internet gateway. We recommend deploying nodes to private subnets, whenever possible.
- For versions of the plugin earlier than 1.8.0, resources that are in networks or VPCs that are connected to your cluster VPC using VPC peering, a transit VPC, or AWS Direct Connect can't initiate communication to your Pods behind secondary elastic network interfaces. Your Pods can initiate communication to those resources and receive responses from them, though.
If either of the following statements is true in your environment, then change the default configuration with the command that follows.

- You have resources in networks or VPCs that are connected to your cluster VPC using VPC peering, a transit VPC, or AWS Direct Connect that need to initiate communication with your Pods using an IPv4 address, and your plugin version is earlier than 1.8.0.
- Your Pods are in a private subnet and need to communicate outbound to the internet. The subnet has a route to a NAT gateway.
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
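To confirm that the change took effect, you can list the DaemonSet's environment variables. This assumes kubectl is already configured to talk to your cluster.

```shell
# List the aws-node DaemonSet's environment variables and filter for the
# external SNAT setting; AWS_VPC_K8S_CNI_EXTERNALSNAT=true should appear
# after running the command above.
kubectl set env daemonset aws-node -n kube-system --list | grep AWS_VPC_K8S_CNI_EXTERNALSNAT
```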
Note
The AWS_VPC_K8S_CNI_EXTERNALSNAT and AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS CNI configuration variables aren't applicable to Windows nodes. Disabling SNAT isn't supported for Windows. As for excluding a list of IPv4 CIDRs from SNAT, you can define this by specifying the ExcludedSnatCIDRs parameter in the Windows bootstrap script. For more information on using this parameter, see Bootstrap script configuration parameters.
Host networking
*If a Pod's spec contains hostNetwork=true (default is false), then its IP address isn't translated to a different address. This is the case for the kube-proxy and Amazon VPC CNI plugin for Kubernetes Pods that run on your cluster, by default. For these Pods, the IP address is the same as the node's primary IP address, so the Pod's IP address isn't translated. For more information about a Pod's hostNetwork setting, see PodSpec v1 core.
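As an illustration, a minimal Pod manifest that opts into host networking might look like the following. The Pod name and container image are hypothetical examples, not taken from this guide.

```shell
# Write a hypothetical Pod manifest that uses host networking. With
# hostNetwork: true, the Pod shares the node's network namespace, so its IP
# is the node's primary IP address and is never SNATed.
cat <<'EOF' > hostnetwork-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnetwork-example
spec:
  hostNetwork: true
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux:2023
    command: ["sleep", "3600"]
EOF
# Then apply it with: kubectl apply -f hostnetwork-example.yaml
```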