Amazon EKS
User Guide

Launching Amazon EKS Worker Nodes

This topic helps you to launch an Auto Scaling group of worker nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching Amazon EKS worker nodes, we recommend that you follow one of our Getting Started with Amazon EKS guides instead. They provide complete end-to-end walkthroughs for creating an Amazon EKS cluster with worker nodes.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 prices. For more information, see Amazon EC2 Pricing.

Choose the tab below that corresponds to your desired worker node creation method:

eksctl

To launch worker nodes with eksctl

This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.1.37. You can check your version with the following command:

eksctl version

For more information on installing or upgrading eksctl, see Installing or Upgrading eksctl.

Note

This procedure only works for clusters that were created with eksctl.

  • Create your worker node group with the following command, substituting the example values with your own.

    eksctl create nodegroup \
      --cluster default \
      --version auto \
      --name standard-workers \
      --node-type t3.medium \
      --node-ami auto \
      --nodes 3 \
      --nodes-min 1 \
      --nodes-max 4

    Note

    For more information on the available options for eksctl create nodegroup, see the project README on GitHub or view the help page with the following command.

    eksctl create nodegroup --help

    Output:

    [ℹ]  using region us-west-2
    [ℹ]  will use version 1.12 for new nodegroup(s) based on control plane version
    [ℹ]  nodegroup "standard-workers" will use "ami-0923e4b35a30a5f53" [AmazonLinux2/1.12]
    [ℹ]  1 nodegroup (standard-workers) was included
    [ℹ]  will create a CloudFormation stack for each of 1 nodegroups in cluster "default"
    [ℹ]  1 task: { create nodegroup "standard-workers" }
    [ℹ]  building nodegroup stack "eksctl-default-nodegroup-standard-workers"
    [ℹ]  deploying stack "eksctl-default-nodegroup-standard-workers"
    [ℹ]  adding role "arn:aws:iam::111122223333:role/eksctl-default-nodegroup-standard-NodeInstanceRole-12C2JO814XSEE" to auth ConfigMap
    [ℹ]  nodegroup "standard-workers" has 0 node(s)
    [ℹ]  waiting for at least 1 node(s) to become ready in "standard-workers"
    [ℹ]  nodegroup "standard-workers" has 3 node(s)
    [ℹ]  node "ip-192-168-52-42.us-west-2.compute.internal" is ready
    [ℹ]  node "ip-192-168-7-27.us-west-2.compute.internal" is not ready
    [ℹ]  node "ip-192-168-76-138.us-west-2.compute.internal" is not ready
    [✔]  created 1 nodegroup(s) in cluster "default"
    [ℹ]  checking security group configuration for all nodegroups
    [ℹ]  all nodegroups have up-to-date configuration
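    After eksctl reports that the node group was created, you can confirm that the new nodes registered with your cluster. This assumes that kubectl is already configured to communicate with the cluster (eksctl writes a kubeconfig entry when it creates the cluster):

    kubectl get nodes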
AWS Management Console

To launch your worker nodes with the AWS Management Console

This procedure assumes that you have already created an Amazon EKS cluster; you enter its name, control plane security group, VPC, and subnets in the steps that follow.

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.
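    You can also check the status from the command line (a sketch, assuming the AWS CLI is installed and your cluster is named default):

    aws eks describe-cluster --name default --query cluster.status --output text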

  2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  3. From the navigation bar, select a Region that supports Amazon EKS.

  4. Choose Create stack.

  5. For Choose a template, select Specify an Amazon S3 template URL.

  6. Paste the following URL into the text area and choose Next.

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml

    Note

    If you intend to deploy worker nodes only to private subnets, you should edit this template in the AWS CloudFormation Designer and modify the AssociatePublicIpAddress parameter in the NodeLaunchConfig to be false.

    AssociatePublicIpAddress: 'false'
  7. On the Specify Details page, fill out the following parameters accordingly, and choose Next:

    • Stack name – Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName – Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match your Amazon EKS cluster name. Otherwise, your worker nodes will be unable to join it.

    • ClusterControlPlaneSecurityGroup – Enter the security group or groups that you used when you created your Amazon EKS cluster. This AWS CloudFormation template creates a worker node security group that allows traffic to and from the cluster control plane security group specified.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • NodeGroupName – Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize – Enter the minimum number of nodes to which your worker node Auto Scaling group can scale in.

    • NodeAutoScalingGroupDesiredCapacity – Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes to which your worker node Auto Scaling group can scale out. This value must be at least one node greater than your desired capacity so that you can perform a rolling update of your worker nodes without reducing your node count during the update.

    • NodeInstanceType – Choose an instance type for your worker nodes. The instance type and size that you choose determines how many IP addresses are available per worker node for the containers in your pods. For more information, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are listed in the plugin's GitHub repository. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades.

      Important

      Some instance types might not be available in all regions.
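      To see how the instance type bounds pod density, here is a worked calculation using the max pods formula that the Amazon VPC CNI plugin documents: max pods = ENIs * (IPv4 addresses per ENI - 1) + 2. A t3.medium supports 3 network interfaces with 6 IPv4 addresses each, and one address per interface is reserved for the node itself, so:

      max pods = 3 * (6 - 1) + 2 = 17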

    • NodeImageId – Enter the current Amazon EKS worker node AMI ID for your Region. The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support) are shown in the following table. Be sure to choose the correct AMI ID for your desired Kubernetes version and AWS region.

      Note

      The Amazon EKS-optimized AMI with GPU support only supports P2 and P3 instance types. Be sure to specify these instance types in your worker node AWS CloudFormation template. By using the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license agreement (EULA).

      Kubernetes version 1.13.8

      Region                                     | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2)                 | ami-027683840ad78d833    | ami-0af8403c143fd4a07
      US East (N. Virginia) (us-east-1)          | ami-0d3998d69ebe9b214    | ami-0484012ada3522476
      US West (Oregon) (us-west-2)               | ami-00b95829322267382    | ami-0d24da600cc96ae6b
      Asia Pacific (Hong Kong) (ap-east-1)       | ami-03f8634a8fd592414    | ami-080eb165234752969
      Asia Pacific (Mumbai) (ap-south-1)         | ami-0062e5b0411e77c1a    | ami-010dbb7183ab64b39
      Asia Pacific (Tokyo) (ap-northeast-1)      | ami-0a67c71d2ab43d36f    | ami-069303796840f8155
      Asia Pacific (Seoul) (ap-northeast-2)      | ami-0d66d2fefbc86831a    | ami-04f71dc710ff5baf4
      Asia Pacific (Singapore) (ap-southeast-1)  | ami-06206d907abb34bbc    | ami-0213fc532b1c2e05f
      Asia Pacific (Sydney) (ap-southeast-2)     | ami-09f2d86f2d8c4f77d    | ami-01fc0a4c67f82532b
      EU (Frankfurt) (eu-central-1)              | ami-038bd8d3a2345061f    | ami-07b7cbb235789cc31
      EU (Ireland) (eu-west-1)                   | ami-0199284372364b02a    | ami-00bfeece5b673b69f
      EU (London) (eu-west-2)                    | ami-0f454b09349248e29    | ami-0babebc79dbf6016c
      EU (Paris) (eu-west-3)                     | ami-00b44348ab3eb2c9f    | ami-03136b5b83c5b61ba
      EU (Stockholm) (eu-north-1)                | ami-02218be9004537a65    | ami-057821acea15c1a98

      Kubernetes version 1.12.10

      Region                                     | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2)                 | ami-0ebb1c51e5fe9c376    | ami-0b42bfc7af8bb3abc
      US East (N. Virginia) (us-east-1)          | ami-01e370f796735b244    | ami-0eb0119f55d589a03
      US West (Oregon) (us-west-2)               | ami-0b520e822d42998c1    | ami-0c9156d7fcd3c2948
      Asia Pacific (Hong Kong) (ap-east-1)       | ami-0aa07b9e8bfcdaaff    | ami-0a5e7de0e5d22a988
      Asia Pacific (Mumbai) (ap-south-1)         | ami-03b7b0e3088a72394    | ami-0c1bc87ff613a979b
      Asia Pacific (Tokyo) (ap-northeast-1)      | ami-0f554256ac7b33081    | ami-0e2f87975f5aa9908
      Asia Pacific (Seoul) (ap-northeast-2)      | ami-066a40f5f0e0b90f4    | ami-08101c357b41e9f9a
      Asia Pacific (Singapore) (ap-southeast-1)  | ami-06a42a7479836d402    | ami-0420c66a82472f4b2
      Asia Pacific (Sydney) (ap-southeast-2)     | ami-0f93997f60ca40d26    | ami-04a085528a6af6499
      EU (Frankfurt) (eu-central-1)              | ami-04341c15c2f941589    | ami-09c45f4e40a56254b
      EU (Ireland) (eu-west-1)                   | ami-018b4a3f81f517183    | ami-04668c090ff8c1f50
      EU (London) (eu-west-2)                    | ami-0fd0b45d54f80a0e9    | ami-0b925567bd252e74c
      EU (Paris) (eu-west-3)                     | ami-0b12420c7f7281432    | ami-0f975ac243bcd0da0
      EU (Stockholm) (eu-north-1)                | ami-01c1b0b8dcbd02b11    | ami-093da2874a5426ce3

      Kubernetes version 1.11.10

      Region                                     | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2)                 | ami-0e565ff1ccb9b6979    | ami-0f9e62727a55f68d3
      US East (N. Virginia) (us-east-1)          | ami-08571c6cee1adbb62    | ami-0c3d92683a7946ac3
      US West (Oregon) (us-west-2)               | ami-0566833f0c8e9031e    | ami-058b22acd515ec20b
      Asia Pacific (Hong Kong) (ap-east-1)       | ami-0e2e431905d176277    | ami-0baf9ac8446e87fb5
      Asia Pacific (Mumbai) (ap-south-1)         | ami-073c3d075aeb53d1f    | ami-0c709282458d1114c
      Asia Pacific (Tokyo) (ap-northeast-1)      | ami-0644b094efc34d888    | ami-023f507ec007de487
      Asia Pacific (Seoul) (ap-northeast-2)      | ami-0ab0067299faa5229    | ami-0ccbbe6530310b01d
      Asia Pacific (Singapore) (ap-southeast-1)  | ami-087f58c635bb8283b    | ami-0341435cf966cb837
      Asia Pacific (Sydney) (ap-southeast-2)     | ami-06caef7a88fd74af2    | ami-0987b07bd338f97db
      EU (Frankfurt) (eu-central-1)              | ami-099b3f8db68693895    | ami-060f13bd7397f782d
      EU (Ireland) (eu-west-1)                   | ami-06b60c5852910e7b5    | ami-0d84963dfda5af073
      EU (London) (eu-west-2)                    | ami-0b56c1f39e4b1eb8e    | ami-0189e53a00d37a0b6
      EU (Paris) (eu-west-3)                     | ami-036237d1951bfeabc    | ami-0baea83f5f5d2abfe
      EU (Stockholm) (eu-north-1)                | ami-0612e10dfe00c5ff6    | ami-0d5b7823e58094232

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • KeyName – Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments – Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh
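      For example, to pass extra kubelet arguments that add a custom node label (a sketch; --kubelet-extra-args is one of the options the bootstrap script accepts, and the label itself is illustrative, so verify against the script at the URL above):

      --kubelet-extra-args '--node-labels=nodegroup=standard-workers'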

    • VpcId – Enter the ID for the VPC that your worker nodes should launch into.

    • Subnets – Choose the subnets within the preceding VPC that your worker nodes should launch into. If you are launching worker nodes into only private subnets, do not include public subnets here.

  8. On the Options page, you can choose to tag your stack resources. Choose Next.

  9. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Create.
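    If you prefer the AWS CLI to the console, steps 4-9 can be approximated with a single create-stack call (a sketch; only two of the parameters from step 7 are shown for brevity, the remaining required parameters must be supplied the same way, and all values are placeholders):

    aws cloudformation create-stack \
      --stack-name my-cluster-worker-nodes \
      --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml \
      --capabilities CAPABILITY_IAM \
      --parameters ParameterKey=ClusterName,ParameterValue=my-cluster ParameterKey=NodeGroupName,ParameterValue=standard-workers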

  10. When your stack has finished creating, select it in the console and choose Outputs.

  11. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
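    You can also read this value with the AWS CLI (a sketch; my-cluster-worker-nodes is a placeholder for your stack name):

    aws cloudformation describe-stacks \
      --stack-name my-cluster-worker-nodes \
      --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
      --output text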

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Use the following command to download the configuration map:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.
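      As an optional check, you can read the ConfigMap back to confirm that your role mapping was saved:

      kubectl get configmap aws-auth -n kube-system -o yaml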

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a P2 or P3 instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
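    Once the DaemonSet is running, you can verify that GPUs are schedulable with a minimal test pod that requests one GPU and runs nvidia-smi (a sketch; the pod name and image tag are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-test
    spec:
      restartPolicy: Never
      containers:
        - name: cuda
          image: nvidia/cuda:9.2-base
          command: ["nvidia-smi"]
          resources:
            limits:
              nvidia.com/gpu: 1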