Amazon EKS
User Guide

Launching Amazon EKS Worker Nodes

This topic helps you to launch an Auto Scaling group of worker nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching Amazon EKS worker nodes, we recommend that you follow our Getting Started with Amazon EKS guide instead. The guide provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 On-Demand Instance prices. For more information, see Amazon EC2 Pricing.

This topic has the following prerequisites:

  • An existing Amazon EKS cluster. If you don't have one, see Creating an Amazon EKS Cluster.

  • The VPC and security group that you used when you created the cluster.

To launch your worker nodes

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.

  2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  3. From the navigation bar, select a Region that supports Amazon EKS.

  4. Choose Create stack.

  5. For Choose a template, select Specify an Amazon S3 template URL.

  6. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml
  7. On the Specify Details page, fill out the following parameters accordingly, and choose Next:

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName: Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match your Amazon EKS cluster name. Otherwise, your worker nodes will be unable to join it.

    • ClusterControlPlaneSecurityGroup: Enter the security group or groups that you used when you created your Amazon EKS cluster. This AWS CloudFormation template creates a worker node security group that allows traffic to and from the cluster control plane security group that you specify.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize: Enter the minimum number of nodes to which your worker node Auto Scaling group can scale in.

    • NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes to which your worker node Auto Scaling group can scale out. This value must be at least 1 node greater than your desired capacity so that you can perform a rolling update of your worker nodes without reducing your node count during the update.

    • NodeInstanceType: Choose an instance type for your worker nodes. The instance type and size that you choose determines how many IP addresses are available per worker node for the containers in your pods. For more information, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are shown here. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades.

      Important

      Some instance types might not be available in all regions.

    • NodeImageId: Enter the current Amazon EKS worker node AMI ID for your Region. The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support) are shown in the following table. Be sure to choose the correct AMI ID for your desired Kubernetes version and AWS region.

      Note

      The Amazon EKS-optimized AMI with GPU support only supports P2 and P3 instance types. Be sure to specify these instance types in your worker node AWS CloudFormation template. By using the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license agreement (EULA).

      Kubernetes version 1.12.7
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US West (Oregon) (us-west-2) ami-0923e4b35a30a5f53 ami-0bebf2322fd52a42e
      US East (N. Virginia) (us-east-1) ami-0abcb9f9190e867ab ami-0cb7959f92429410a
      US East (Ohio) (us-east-2) ami-04ea7cb66af82ae4a ami-0118b61dc2312dee2
      EU (Frankfurt) (eu-central-1) ami-0d741ed58ca5b342e ami-0c57db5b204001099
      EU (Stockholm) (eu-north-1) ami-0c65a309fc58f6907 ami-09354b076296f5946
      EU (Ireland) (eu-west-1) ami-08716b70cac884aaa ami-0fbc930681258db86
      EU (London) (eu-west-2) ami-0c7388116d474ee10 ami-0d832fced2cfe0f7b
      EU (Paris) (eu-west-3) ami-0560aea042fec8b12 ami-0f8fa088b406ebba2
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0bfedee6a7845c26d ami-08e41cc84f4b3f27f
      Asia Pacific (Seoul) (ap-northeast-2) ami-0a904348b703e620c ami-0c43b885e33fdc29e
      Asia Pacific (Mumbai) (ap-south-1) ami-09c3eb35bb3be46a4 ami-0d3ecaf4f3318c714
      Asia Pacific (Singapore) (ap-southeast-1) ami-07b922b9b94d9a6d2 ami-0655b4dbbe2d46703
      Asia Pacific (Sydney) (ap-southeast-2) ami-0f0121e9e64ebd3dc ami-07079cd9ff1b312da
      Kubernetes version 1.11.9
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US West (Oregon) (us-west-2) ami-05ecac759c81e0b0c ami-08377056d89909b2a
      US East (N. Virginia) (us-east-1) ami-02c1de421df89c58d ami-06ec2ea207616c078
      US East (Ohio) (us-east-2) ami-03b1b6cc34c010f9c ami-0e6993a35aae3407b
      EU (Frankfurt) (eu-central-1) ami-0c2709025eb548246 ami-0bf09c13f4204ce9d
      EU (Stockholm) (eu-north-1) ami-084bd3569d08c6e67 ami-0a1714bb5be631b59
      EU (Ireland) (eu-west-1) ami-0e82e73403dd69fa3 ami-0b4d0f56587640d5a
      EU (London) (eu-west-2) ami-0da9aa88dd2ec8297 ami-00e98f9e6fd2319e5
      EU (Paris) (eu-west-3) ami-099369bc73d1cc66f ami-0039e2556e6290828
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0d555d5f56c843803 ami-07fc636e8f6d3e18b
      Asia Pacific (Seoul) (ap-northeast-2) ami-0144ae839b1111571 ami-002057772097fcef9
      Asia Pacific (Mumbai) (ap-south-1) ami-02071c0110dc365ba ami-04fe7f4c75aac7196
      Asia Pacific (Singapore) (ap-southeast-1) ami-00c91afdb73cf7f93 ami-08d5da0b12751a31f
      Asia Pacific (Sydney) (ap-southeast-2) ami-05f4510fcfe56961c ami-04024dd8e0b9e36ff
      Kubernetes version 1.10.13
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US West (Oregon) (us-west-2) ami-05a71d034119ffc12 ami-0901518d7557125c8
      US East (N. Virginia) (us-east-1) ami-03a1e71fb42fc37dd ami-00f74c3728d4ca27d
      US East (Ohio) (us-east-2) ami-093d55c2ba99ab2c8 ami-0a788defb66cdfffb
      EU (Frankfurt) (eu-central-1) ami-03bdf8079f6c013c5 ami-0a8536a894bd4ea06
      EU (Stockholm) (eu-north-1) ami-0be77fe86d741fc81 ami-05baf7a6c293fe2ed
      EU (Ireland) (eu-west-1) ami-06368da7f495b68e9 ami-0f6f3929a9d7a418e
      EU (London) (eu-west-2) ami-0f1f2189b4741bc60 ami-0a12396b818bc2383
      EU (Paris) (eu-west-3) ami-03a9acb0f6e0d424d ami-086d5edcaacd0ccfd
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0c9fb6a3fda95d373 ami-073f06a1edd22ae2e
      Asia Pacific (Seoul) (ap-northeast-2) ami-00ea4ea959f28b4cf ami-0baff950f5217e54e
      Asia Pacific (Mumbai) (ap-south-1) ami-0f07478f5c5eb9e20 ami-033bd2c2a3431923e
      Asia Pacific (Singapore) (ap-southeast-1) ami-05dac5d0ada75e22f ami-09defa93988984fa1
      Asia Pacific (Sydney) (ap-southeast-2) ami-00513f18e1900ce1e ami-00d9364d705e902c9

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, see the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh

    • VpcId: Enter the ID for the VPC that your worker nodes should launch into.

    • Subnets: Choose the subnets within the above VPC that your worker nodes should launch into.

  8. On the Options page, you can choose to tag your stack resources. Choose Next.

  9. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Create.

  10. When your stack has finished creating, select it in the console and choose Outputs.

  11. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
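The console procedure above can also be scripted with the AWS CLI. The following is a sketch, not a definitive procedure: the cluster name, node group name, security group, key pair, instance type, VPC, and subnet IDs are all placeholders that you must replace with your own values, and the AMI ID shown is the US West (Oregon) Kubernetes 1.12.7 AMI from the table above.

```shell
#!/bin/sh
# Sketch of the console steps as AWS CLI calls. Every value below
# (cluster name, security group, key pair, AMI, VPC, subnets) is a
# placeholder; substitute your own before running.
CLUSTER_NAME="my-cluster"
STACK_NAME="${CLUSTER_NAME}-worker-nodes"

launch_worker_nodes() {
    # Step 1: wait for the cluster status to reach ACTIVE.
    aws eks wait cluster-active --name "$CLUSTER_NAME"

    # Steps 2-9: create the worker node stack from the same template
    # used in the console, acknowledging IAM resource creation.
    aws cloudformation create-stack \
        --stack-name "$STACK_NAME" \
        --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml \
        --capabilities CAPABILITY_IAM \
        --parameters \
            ParameterKey=ClusterName,ParameterValue="$CLUSTER_NAME" \
            ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-11111111 \
            ParameterKey=NodeGroupName,ParameterValue=my-node-group \
            ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
            ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=3 \
            ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4 \
            ParameterKey=NodeInstanceType,ParameterValue=t3.medium \
            ParameterKey=NodeImageId,ParameterValue=ami-0923e4b35a30a5f53 \
            ParameterKey=KeyName,ParameterValue=my-key-pair \
            ParameterKey=VpcId,ParameterValue=vpc-11111111 \
            ParameterKey=Subnets,ParameterValue=\'subnet-11111111,subnet-22222222\'

    # Steps 10-11: wait for the stack, then print NodeInstanceRole.
    aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME"
    aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
        --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
        --output text
}

# Guard: only run against your AWS account when explicitly requested.
if [ -n "${RUN_EKS_SETUP:-}" ]; then
    launch_worker_nodes
fi
```

Note that the maximum size (4) is one greater than the desired capacity (3), as the NodeAutoScalingGroupMaxSize parameter description requires.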

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Use the following command to download the configuration map:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a P2 or P3 instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a daemon set on your cluster with the following command.

    Note

    If your cluster is running a different Kubernetes version than 1.12, be sure to substitute your cluster's version in the following URL.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml
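The edit in step 1.b of the procedure above can be scripted rather than done in a text editor. The following is a sketch; the role ARN is a placeholder for the NodeInstanceRole value that you recorded from your stack outputs, and the file contents are written inline here only so the example is self-contained (normally you download the file with curl as shown above).

```shell
#!/bin/sh
# Placeholder ARN; use the NodeInstanceRole value from your
# CloudFormation stack outputs instead.
NODE_INSTANCE_ROLE="arn:aws:iam::111122223333:role/my-node-instance-role"

# Recreate the file as shipped, with its placeholder line (normally
# downloaded with curl from the URL in step 1.a).
cat > aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

# Substitute only the rolearn placeholder; every other line is
# left unmodified, as the Important note above requires.
sed -i.bak "s|<ARN of instance role (not instance profile)>|$NODE_INSTANCE_ROLE|" aws-auth-cm.yaml

# Show the substituted line.
grep rolearn aws-auth-cm.yaml
```

After the substitution, apply the file with kubectl apply -f aws-auth-cm.yaml as in step 1.c.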