Deploy private clusters with limited internet access

This topic describes how to deploy an Amazon EKS cluster on the AWS Cloud that doesn’t have outbound internet access. If you have a local cluster on AWS Outposts, see Create Amazon Linux nodes on AWS Outposts instead of this topic.

If you’re not familiar with Amazon EKS networking, see De-mystifying cluster networking for Amazon EKS worker nodes. If your cluster doesn’t have outbound internet access, then it must meet the following requirements:

  • Your cluster must pull images from a container registry that’s in your VPC. You can create an Amazon Elastic Container Registry (Amazon ECR) repository that’s accessible from your VPC and copy container images to it for your nodes to pull from. For more information, see Copy a container image from one repository to another repository.
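
    A minimal sketch of that copy workflow, run from a machine that still has internet access, might look like the following. The repository name (my-repo), account ID (111122223333), and source image are placeholders.

      # Create a private repository and authenticate Docker to it.
      aws ecr create-repository --repository-name my-repo --region region-code
      aws ecr get-login-password --region region-code | docker login --username AWS \
          --password-stdin 111122223333.dkr.ecr.region-code.amazonaws.com

      # Pull the public image, retag it, and push it to the private repository.
      docker pull public.ecr.aws/nginx/nginx:latest
      docker tag public.ecr.aws/nginx/nginx:latest 111122223333.dkr.ecr.region-code.amazonaws.com/my-repo:latest
      docker push 111122223333.dkr.ecr.region-code.amazonaws.com/my-repo:latest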

  • Your cluster must have endpoint private access enabled. This is required for nodes to register with the cluster endpoint. Endpoint public access is optional. For more information, see Control network access to cluster API server endpoint.
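
    For example, the following AWS CLI command turns on private access and turns off public access for an existing cluster. Replace my-cluster with the name of your cluster.

      aws eks update-cluster-config --name my-cluster \
          --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false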

  • Self-managed Linux and Windows nodes must be launched with the following bootstrap arguments. These arguments bypass Amazon EKS introspection and don’t require access to the Amazon EKS API from within the VPC.

    1. Determine the value of your cluster’s endpoint with the following command. Replace my-cluster with the name of your cluster.

      aws eks describe-cluster --name my-cluster --query cluster.endpoint --output text

      An example output is as follows.

      https://EXAMPLE108C897D9B2F1B21D5EXAMPLE.sk1.region-code.eks.amazonaws.com
    2. Determine the value of your cluster’s certificate authority with the following command. Replace my-cluster with the name of your cluster.

      aws eks describe-cluster --name my-cluster --query cluster.certificateAuthority --output text

      The returned output is a long string.

    3. Replace cluster-endpoint and certificate-authority in the following commands with the values returned in the output from the previous commands. For more information about specifying bootstrap arguments when launching self-managed nodes, see Create self-managed Amazon Linux nodes and Create self-managed Microsoft Windows nodes.

      • For Linux nodes:

        --apiserver-endpoint cluster-endpoint --b64-cluster-ca certificate-authority

        For additional arguments, see the bootstrap script on GitHub.
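
        As a sketch, a user data excerpt for an Amazon Linux node that passes these arguments to the bootstrap script might look like the following, using the endpoint and certificate authority values returned by the previous commands.

        #!/bin/bash
        # Bootstrap the node without calling the Amazon EKS API from within the VPC.
        /etc/eks/bootstrap.sh my-cluster \
            --apiserver-endpoint https://EXAMPLE108C897D9B2F1B21D5EXAMPLE.sk1.region-code.eks.amazonaws.com \
            --b64-cluster-ca certificate-authority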

      • For Windows nodes:

        Note

        If you’re using a custom service CIDR, then you need to specify it using the -ServiceCIDR parameter. Otherwise, DNS resolution for Pods in the cluster will fail.

        -APIServerEndpoint cluster-endpoint -Base64ClusterCA certificate-authority

        For additional arguments, see Bootstrap script configuration parameters.
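
        As a sketch, a user data excerpt for a Windows node might look like the following. The script path matches where Amazon EKS optimized Windows AMIs install the bootstrap script, and the -ServiceCIDR parameter is only needed if your service CIDR is custom.

        <powershell>
        # Bootstrap the node without calling the Amazon EKS API from within the VPC.
        & "C:\Program Files\Amazon\EKS\Start-EKSBootstrap.ps1" -EKSClusterName my-cluster `
            -APIServerEndpoint https://EXAMPLE108C897D9B2F1B21D5EXAMPLE.sk1.region-code.eks.amazonaws.com `
            -Base64ClusterCA certificate-authority `
            -ServiceCIDR service-cidr
        </powershell>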

  • Your cluster’s aws-auth ConfigMap must be created from within your VPC. For more information about creating and adding entries to the aws-auth ConfigMap, enter eksctl create iamidentitymapping --help in your terminal. If the ConfigMap doesn’t exist in your cluster, eksctl will create it when you use the command to add an identity mapping.
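
    For example, an identity mapping for a node instance role might be created as follows, run from within the VPC. The role ARN is a placeholder.

      eksctl create iamidentitymapping --cluster my-cluster --region region-code \
          --arn arn:aws:iam::111122223333:role/my-node-role \
          --username system:node:{{EC2PrivateDNSName}} \
          --group system:bootstrappers --group system:nodes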

  • Pods configured with IAM roles for service accounts acquire credentials from an AWS Security Token Service (AWS STS) API call. If there is no outbound internet access, you must create and use an AWS STS VPC endpoint in your VPC. Most AWS v1 SDKs use the global AWS STS endpoint by default (sts.amazonaws.com), which doesn’t use the AWS STS VPC endpoint. To use the AWS STS VPC endpoint, you might need to configure your SDK to use the regional AWS STS endpoint (sts.region-code.amazonaws.com). For more information, see Configure the AWS Security Token Service endpoint for a service account.
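
    For example, many AWS SDKs switch to the regional endpoint when the following environment variable is set in the Pod’s containers.

      export AWS_STS_REGIONAL_ENDPOINTS=regional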

  • Your cluster’s VPC subnets must have a VPC interface endpoint for any AWS services that your Pods need access to. For more information, see Access an AWS service using an interface VPC endpoint. Some commonly used services and their endpoints are listed below. For a complete list of endpoints, see AWS services that integrate with AWS PrivateLink in the AWS PrivateLink Guide.

    • Amazon EC2: com.amazonaws.region-code.ec2

    • Amazon Elastic Container Registry (for pulling container images): com.amazonaws.region-code.ecr.api, com.amazonaws.region-code.ecr.dkr, and com.amazonaws.region-code.s3

    • Application Load Balancers and Network Load Balancers: com.amazonaws.region-code.elasticloadbalancing

    • AWS X-Ray: com.amazonaws.region-code.xray

    • Amazon CloudWatch Logs: com.amazonaws.region-code.logs

    • AWS Security Token Service (required when using IAM roles for service accounts): com.amazonaws.region-code.sts
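
    For example, you might create one of these interface endpoints with the AWS CLI. The VPC, subnet, and security group IDs are placeholders.

      aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
          --vpc-id vpc-EXAMPLE11111111111 \
          --service-name com.amazonaws.region-code.sts \
          --subnet-ids subnet-EXAMPLE22222222222 subnet-EXAMPLE33333333333 \
          --security-group-ids sg-EXAMPLE44444444444 \
          --private-dns-enabled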

  • Any self-managed nodes must be deployed to subnets that have the VPC interface endpoints that you require. If you create a managed node group, the VPC interface endpoint security group must allow the CIDR for the subnets, or you must add the created node security group to the VPC interface endpoint security group.
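
    For example, the following command allows inbound HTTPS traffic from the node security group to an endpoint’s security group. Both group IDs are placeholders.

      aws ec2 authorize-security-group-ingress --group-id sg-EXAMPLEendpoint1 \
          --protocol tcp --port 443 --source-group sg-EXAMPLEnodes0001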

  • If your Pods use Amazon EFS volumes, then before deploying the Amazon EFS CSI driver (see Store an elastic file system with Amazon EFS), you must change the driver’s kustomization.yaml file to set the container images to use the same AWS Region as the Amazon EKS cluster.
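
    As a sketch of one way to do this, you could render the driver’s manifest, edit the image registries, and then apply it. The overlay path and ref below are assumptions; check the driver’s repository for the values that match your cluster version.

      kubectl kustomize \
          "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master" > driver.yaml
      # Edit driver.yaml (or the overlay's kustomization.yaml) so that every container
      # image references a registry in the same AWS Region as your cluster, then:
      kubectl apply -f driver.yaml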

  • You can use the AWS Load Balancer Controller to deploy AWS Application Load Balancers (ALB) and Network Load Balancers to your private cluster. When deploying it, you should use command line flags to set enable-shield, enable-waf, and enable-wafv2 to false. Certificate discovery with hostnames from Ingress objects isn’t supported. This is because the controller needs to reach AWS Certificate Manager, which doesn’t have a VPC interface endpoint.

    The controller supports Network Load Balancers with IP targets, which are required for use with Fargate. For more information, see Route application and HTTP traffic with Application Load Balancers and Create a network load balancer.
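
    For example, if you deploy the controller with its Helm chart, the flags map to chart values that might be set as follows. The value names are assumptions based on the eks/aws-load-balancer-controller chart, so verify them against the chart version that you deploy.

      # Assumes the eks-charts Helm repository has already been added.
      helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
          --namespace kube-system \
          --set clusterName=my-cluster \
          --set region=region-code \
          --set vpcId=vpc-EXAMPLE11111111111 \
          --set enableShield=false \
          --set enableWaf=false \
          --set enableWafv2=false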

  • Cluster Autoscaler is supported. When deploying Cluster Autoscaler Pods, make sure that the command line includes --aws-use-static-instance-list=true. For more information, see Use Static Instance List on GitHub. The worker node VPC must also include the AWS STS VPC endpoint and autoscaling VPC endpoint.
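
    For example, with the cluster-autoscaler Helm chart the flag might be passed as an extra argument, as in the following sketch. The chart and value names are assumptions, so adapt them to how you deploy Cluster Autoscaler.

      helm install cluster-autoscaler autoscaler/cluster-autoscaler \
          --namespace kube-system \
          --set autoDiscovery.clusterName=my-cluster \
          --set awsRegion=region-code \
          --set "extraArgs.aws-use-static-instance-list=true"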

  • Some container software products use API calls to the AWS Marketplace Metering Service to monitor usage. Private clusters don’t allow these calls, so you can’t use these products in a private cluster.