
Launching Amazon EKS Worker Nodes

This topic helps you to launch an Auto Scaling group of worker nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching Amazon EKS worker nodes, we recommend that you follow our Getting Started with Amazon EKS guide instead. The guide provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 On-Demand Instance prices. For more information, see Amazon EC2 Pricing.

This topic has the following prerequisites:

  • An existing VPC and security group that meet the requirements for an Amazon EKS cluster. The Getting Started with Amazon EKS guide creates a VPC that meets these requirements.

  • An existing Amazon EKS cluster. To create one, see Creating an Amazon EKS Cluster.

To launch your worker nodes

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.
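
    You can also check the status from the AWS CLI; for example (replace <cluster-name> with the name of your cluster):

      aws eks describe-cluster --name <cluster-name> --query cluster.status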

  2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  3. From the navigation bar, select a Region that supports Amazon EKS.

    Note

    Amazon EKS is available in the following Regions at this time:

    • US West (Oregon) (us-west-2)

    • US East (N. Virginia) (us-east-1)

    • US East (Ohio) (us-east-2)

    • EU (Frankfurt) (eu-central-1)

    • EU (Stockholm) (eu-north-1)

    • EU (Ireland) (eu-west-1)

    • EU (London) (eu-west-2)

    • EU (Paris) (eu-west-3)

    • Asia Pacific (Tokyo) (ap-northeast-1)

    • Asia Pacific (Seoul) (ap-northeast-2)

    • Asia Pacific (Mumbai) (ap-south-1)

    • Asia Pacific (Singapore) (ap-southeast-1)

    • Asia Pacific (Sydney) (ap-southeast-2)

  4. Choose Create stack.

  5. For Choose a template, select Specify an Amazon S3 template URL.

  6. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml
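
    If you want to review the template before you launch it, you can optionally download a local copy first:

      curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml
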
  7. On the Specify Details page, fill out the following parameters accordingly, and choose Next:

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName: Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match your Amazon EKS cluster name. Otherwise, your worker nodes will be unable to join it.

    • ClusterControlPlaneSecurityGroup: Enter the security group or groups that you used when you created your Amazon EKS cluster. This AWS CloudFormation template creates a worker node security group that allows traffic to and from the cluster control plane security group specified.

    • NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize: Enter the minimum number of nodes to which your worker node Auto Scaling group can scale in.

    • NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes to which your worker node Auto Scaling group can scale out. This value must be at least 1 node greater than your desired capacity so that you can perform a rolling update of your worker nodes without reducing your node count during the update.

    • NodeInstanceType: Choose an instance type for your worker nodes. The instance type and size that you choose determines how many IP addresses are available per worker node for the containers in your pods. For more information, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are shown here. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades.

    • NodeImageId: Enter the current Amazon EKS worker node AMI ID for your Region. The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support) are shown in the following table. Be sure to choose the correct AMI ID for your desired Kubernetes version and AWS region.

      Note

      The Amazon EKS-optimized AMI with GPU support only supports P2 and P3 instance types. Be sure to specify these instance types in your worker node AWS CloudFormation template. Because this AMI includes third-party software that requires an end user license agreement (EULA), you must subscribe to the AMI in the AWS Marketplace and accept the EULA before you can use the AMI in your worker node groups. To subscribe to the AMI, visit the AWS Marketplace.

      Kubernetes version 1.11

      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US West (Oregon) (us-west-2) | ami-081099ec932b99961 | ami-095922d81242d0528
      US East (N. Virginia) (us-east-1) | ami-0c5b63ec54dd3fc38 | ami-0a0cbb44e651c5e22
      US East (Ohio) (us-east-2) | ami-0b10ebfc82e446296 | ami-08697e581e49ffecf
      EU (Frankfurt) (eu-central-1) | ami-05e062a123092066a | ami-0444fdaca5263be70
      EU (Stockholm) (eu-north-1) | ami-0da59d86953d1c266 | ami-fe810880
      EU (Ireland) (eu-west-1) | ami-0b469c0fef0445d29 | ami-03b9f52d2b707ce0a
      EU (London) (eu-west-2) | ami-0420d737e57af699c | ami-04ea4358308b693ef
      EU (Paris) (eu-west-3) | ami-0f5a996749bdfa436 | ami-03a8c02c95426b5f6
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-04ef881404deec134 | ami-02bacb819e2777536
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0d87105164496b94b | ami-0e35cc17cf9675a1f
      Asia Pacific (Mumbai) (ap-south-1) | ami-033ea52f19ce48998 | ami-0816e809501cbf4c9
      Asia Pacific (Singapore) (ap-southeast-1) | ami-030c789a75c8bfbca | ami-031361e2106e79386
      Asia Pacific (Sydney) (ap-southeast-2) | ami-0a9b90002a9a1c111 | ami-0fde112efc845caec

      Kubernetes version 1.10

      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US West (Oregon) (us-west-2) | ami-0e36fae01a5fa0d76 | ami-0796d47bbb4361153
      US East (N. Virginia) (us-east-1) | ami-0de0b13514617a168 | ami-04c29548028d8a4a0
      US East (Ohio) (us-east-2) | ami-0d885462fa1a40e3a | ami-0a6f0cc2cbef07ba9
      EU (Frankfurt) (eu-central-1) | ami-074583f8d5a05e27b | ami-0e24c510ebe972f26
      EU (Stockholm) (eu-north-1) | ami-0e1d5399bfbe402e0 | ami-f9810887
      EU (Ireland) (eu-west-1) | ami-076c1952dd7a28909 | ami-098171628d39d4d6c
      EU (London) (eu-west-2) | ami-0bfa0f971add9fb2f | ami-0286d34d9642b1717
      EU (Paris) (eu-west-3) | ami-0f0e4bda9786ec624 | ami-05c4fa636d6b561e3
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-049090cdbc5e3c080 | ami-03c93f6816f8652c7
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0b39dee42365df927 | ami-0089fa930c7f3e830
      Asia Pacific (Mumbai) (ap-south-1) | ami-0c2a98be00f0b5bb4 | ami-0bed4d4741161bae1
      Asia Pacific (Singapore) (ap-southeast-1) | ami-0a3df91af7c8225db | ami-014ed22ec2f34c4bf
      Asia Pacific (Sydney) (ap-southeast-2) | ami-0f4d387d27ad36792 | ami-096064ec61eaa29df

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh

    • VpcId: Enter the ID for the VPC that your worker nodes should launch into.

    • Subnets: Choose the subnets within the above VPC that your worker nodes should launch into.
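
    If you prefer to script this step, the same stack can be launched from the AWS CLI using the parameters described above. The following is a minimal sketch, not a definitive invocation: every bracketed value is a placeholder that you must replace with your own value, the instance type and capacity numbers are examples only, and the optional BootstrapArguments parameter is omitted. The --capabilities CAPABILITY_IAM flag is required because the stack creates IAM resources.

      aws cloudformation create-stack \
        --stack-name <cluster-name>-worker-nodes \
        --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml \
        --capabilities CAPABILITY_IAM \
        --parameters ParameterKey=ClusterName,ParameterValue=<cluster-name> \
                     ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=<security-group-id> \
                     ParameterKey=NodeGroupName,ParameterValue=<node-group-name> \
                     ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
                     ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=3 \
                     ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4 \
                     ParameterKey=NodeInstanceType,ParameterValue=m5.large \
                     ParameterKey=NodeImageId,ParameterValue=<ami-id> \
                     ParameterKey=KeyName,ParameterValue=<key-pair-name> \
                     ParameterKey=VpcId,ParameterValue=<vpc-id> \
                     ParameterKey=Subnets,ParameterValue='<subnet-id-1>\,<subnet-id-2>'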

  8. On the Options page, you can choose to tag your stack resources. Choose Next.

  9. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Create.

  10. When your stack has finished creating, select it in the console and choose Outputs.

  11. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
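
    If you created the stack from the AWS CLI instead, you can wait for creation to finish and retrieve the same output value without the console (replace <stack-name> with the name of your stack):

      aws cloudformation wait stack-create-complete --stack-name <stack-name>
      aws cloudformation describe-stacks --stack-name <stack-name> \
        --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" --output text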

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Download the configuration map:

      curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes

    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, then your kubectl is not configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.
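
      You can confirm that the ConfigMap is in place; for example:

        kubectl describe configmap -n kube-system aws-auth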

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
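
    If a node stays in a NotReady state, you can inspect its conditions and events (replace <node-name> with a name from the kubectl get nodes output):

      kubectl describe node <node-name>
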
  3. (GPU workers only) If you chose a P2 or P3 instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a daemon set on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml
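
    After the plugin is running, you can verify that your GPU nodes advertise allocatable GPUs. This check assumes the plugin exposes the nvidia.com/gpu resource name:

      kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"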