Amazon EKS
User Guide

Launching Amazon EKS Worker Nodes

This topic helps you to launch an Auto Scaling group of worker nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching Amazon EKS worker nodes, we recommend that you follow one of our Getting Started with Amazon EKS guides instead. They provide complete end-to-end walkthroughs for creating an Amazon EKS cluster with worker nodes.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 On-Demand Instance prices. For more information, see Amazon EC2 Pricing.

Choose the section below that corresponds to your desired worker node creation method:

eksctl

To launch worker nodes with eksctl

This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.1.37. You can check your version with the following command:

eksctl version

For more information on installing or upgrading eksctl, see Installing or Upgrading eksctl.
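If you want to script this minimum-version check, one approach (a sketch, not part of the official tooling) is to compare the reported version against 0.1.37 with sort -V:

```shell
# Hypothetical helper: compare an eksctl version string against the 0.1.37
# minimum using sort -V. INSTALLED is an example value; in practice you
# would capture it from the output of `eksctl version`.
INSTALLED="0.1.40"
REQUIRED="0.1.37"
if [ "$(printf '%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n1)" = "$REQUIRED" ]; then
    echo "eksctl version ${INSTALLED} is new enough"
else
    echo "eksctl version ${INSTALLED} is too old; please upgrade"
fi
```

sort -V orders version strings numerically by component, so the required version sorting first means the installed version is at least as new.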

Note

This procedure only works for clusters that were created with eksctl.

  • Create your worker node group with the following command. Substitute the example values (shown here for a cluster named default) with your own.

    eksctl create nodegroup \
      --cluster default \
      --version auto \
      --name standard-workers \
      --node-type t3.medium \
      --node-ami auto \
      --nodes 3 \
      --nodes-min 1 \
      --nodes-max 4

    Note

    For more information on the available options for eksctl create nodegroup, see the project README on GitHub or view the help page with the following command.

    eksctl create nodegroup --help

    Output:

    [ℹ] using region us-west-2
    [ℹ] will use version 1.12 for new nodegroup(s) based on control plane version
    [ℹ] nodegroup "standard-workers" will use "ami-0923e4b35a30a5f53" [AmazonLinux2/1.12]
    [ℹ] 1 nodegroup (standard-workers) was included
    [ℹ] will create a CloudFormation stack for each of 1 nodegroups in cluster "default"
    [ℹ] 1 task: { create nodegroup "standard-workers" }
    [ℹ] building nodegroup stack "eksctl-default-nodegroup-standard-workers"
    [ℹ] deploying stack "eksctl-default-nodegroup-standard-workers"
    [ℹ] adding role "arn:aws:iam::111122223333:role/eksctl-default-nodegroup-standard-NodeInstanceRole-12C2JO814XSEE" to auth ConfigMap
    [ℹ] nodegroup "standard-workers" has 0 node(s)
    [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
    [ℹ] nodegroup "standard-workers" has 3 node(s)
    [ℹ] node "ip-192-168-52-42.us-west-2.compute.internal" is ready
    [ℹ] node "ip-192-168-7-27.us-west-2.compute.internal" is not ready
    [ℹ] node "ip-192-168-76-138.us-west-2.compute.internal" is not ready
    [✔] created 1 nodegroup(s) in cluster "default"
    [ℹ] checking security group configuration for all nodegroups
    [ℹ] all nodegroups have up-to-date configuration
AWS Management Console

To launch your worker nodes with the AWS Management Console

These procedures assume that you have already created an Amazon EKS cluster, along with its VPC and control plane security group.

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.

  2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  3. From the navigation bar, select a Region that supports Amazon EKS.

  4. Choose Create stack.

  5. For Choose a template, select Specify an Amazon S3 template URL.

  6. Paste the following URL into the text area and choose Next.

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml

    Note

    If you created a VPC with private subnets for deploying workers, you should save a copy of this template locally and modify the AssociatePublicIpAddress parameter in the NodeLaunchConfig to be false.

    AssociatePublicIpAddress: 'false'
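    That edit can also be made with sed after you save the template locally. The following is a sketch demonstrated on a minimal stand-in excerpt (the excerpt's structure is illustrative, not a verbatim copy of the template); in practice you would run the sed command against your downloaded amazon-eks-nodegroup.yaml.

```shell
# Create a stand-in excerpt of the launch configuration for demonstration.
cat > nodegroup-excerpt.yaml <<'EOF'
NodeLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    AssociatePublicIpAddress: 'true'
EOF
# Flip the parameter to 'false' in place.
sed -i "s/AssociatePublicIpAddress: 'true'/AssociatePublicIpAddress: 'false'/" nodegroup-excerpt.yaml
# Confirm the change.
grep AssociatePublicIpAddress nodegroup-excerpt.yaml
```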
  7. On the Specify Details page, fill out the following parameters accordingly, and choose Next:

    • Stack name – Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName – Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match your Amazon EKS cluster name. Otherwise, your worker nodes will be unable to join it.

    • ClusterControlPlaneSecurityGroup – Enter the security group or groups that you used when you created your Amazon EKS cluster. This AWS CloudFormation template creates a worker node security group that allows traffic to and from the cluster control plane security group specified.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • NodeGroupName – Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize – Enter the minimum number of nodes to which your worker node Auto Scaling group can scale in.

    • NodeAutoScalingGroupDesiredCapacity – Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes to which your worker node Auto Scaling group can scale out. This value must be at least one node greater than your desired capacity so that you can perform a rolling update of your worker nodes without reducing your node count during the update.

    • NodeInstanceType – Choose an instance type for your worker nodes. The instance type and size that you choose determines how many IP addresses are available per worker node for the containers in your pods. For more information, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are shown here. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades.

      Important

      Some instance types might not be available in all regions.

    • NodeImageId – Enter the current Amazon EKS worker node AMI ID for your Region. The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support) are shown in the following table. Be sure to choose the correct AMI ID for your desired Kubernetes version and AWS region.

      Note

      The Amazon EKS-optimized AMI with GPU support only supports P2 and P3 instance types. Be sure to specify these instance types in your worker node AWS CloudFormation template. By using the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license agreement (EULA).

      Kubernetes version 1.13.7
      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) | ami-07ebcae043cf995aa | ami-01f82bb66c17faf20
      US East (N. Virginia) (us-east-1) | ami-08c4955bcc43b124e | ami-02af865c0f3b337f2
      US West (Oregon) (us-west-2) | ami-089d3b6350c1769a6 | ami-08e5329e1dbf22c6a
      Asia Pacific (Mumbai) (ap-south-1) | ami-0410a80d323371237 | ami-094beaac92afd72eb
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-04c0f02f5e148c80a | ami-0f409159b757b0292
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0b7997a20f8424fb1 | ami-066623eb3f5a82878
      Asia Pacific (Singapore) (ap-southeast-1) | ami-087e0fca60fb5737a | ami-0d660fb17b06078d9
      Asia Pacific (Sydney) (ap-southeast-2) | ami-082dfea752d9163f6 | ami-0d11124f8f06f8a4f
      EU (Frankfurt) (eu-central-1) | ami-02d5e7ca7bc498ef9 | ami-085b174e2e2b41f33
      EU (Ireland) (eu-west-1) | ami-09bbefc07310f7914 | ami-093009474b04965b3
      EU (London) (eu-west-2) | ami-0f03516f22468f14e | ami-08a5d542db43e17ab
      EU (Paris) (eu-west-3) | ami-051015c2c2b73aaea | ami-05cbcb1bc3dbe7a3d
      EU (Stockholm) (eu-north-1) | ami-0c31ee32297e7397d | ami-0f66f596ae68c0353
      Kubernetes version 1.12.7
      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) | ami-0e8d353285e26a68c | ami-09279e76127f808b2
      US East (N. Virginia) (us-east-1) | ami-0200e65a38edfb7e1 | ami-0ae641b4b7ed88d72
      US West (Oregon) (us-west-2) | ami-0f11fd98b02f12a4c | ami-08142df4834399a6b
      Asia Pacific (Mumbai) (ap-south-1) | ami-0644de45344ce867e | ami-000721b659ba73311
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-0dfbca8d183884f02 | ami-0b11aeca80a60fbb5
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0a9d12fe9c2a31876 | ami-08ace4be4e6e52c62
      Asia Pacific (Singapore) (ap-southeast-1) | ami-040bdde117f3828ab | ami-054db05dce73fc060
      Asia Pacific (Sydney) (ap-southeast-2) | ami-01bfe815f644becc0 | ami-0045324a51592dbeb
      EU (Frankfurt) (eu-central-1) | ami-09ed3f40a2b3c11f1 | ami-0bd21d3112638aa26
      EU (Ireland) (eu-west-1) | ami-091fc251b67b776c3 | ami-0ae2f64856228879f
      EU (London) (eu-west-2) | ami-0bc8d0262346bd65e | ami-06cc142c64830e356
      EU (Paris) (eu-west-3) | ami-0084dea61e480763e | ami-02461867f991941f2
      EU (Stockholm) (eu-north-1) | ami-022cd6a50742d611a | ami-04870dc2b156b47fb
      Kubernetes version 1.11.9
      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) | ami-088dad958fbfa643e | ami-05ad04ed51d006bc9
      US East (N. Virginia) (us-east-1) | ami-053e2ac42d872cc20 | ami-06fb2eb20652dafea
      US West (Oregon) (us-west-2) | ami-0743039b7c66a18f5 | ami-0d6743e4d45d710f4
      Asia Pacific (Mumbai) (ap-south-1) | ami-01d152acba5840ba2 | ami-0d888cb5eaaba12d4
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-07765e1384d2e372c | ami-05ab4ae12fa19bfb5
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0656df091f27461cd | ami-0bfd390f3bd942923
      Asia Pacific (Singapore) (ap-southeast-1) | ami-084e9f3625a1a4a09 | ami-0726645aa38e7fe38
      Asia Pacific (Sydney) (ap-southeast-2) | ami-03050c93b7e745696 | ami-0d2ed580683a2ef3c
      EU (Frankfurt) (eu-central-1) | ami-020f08a17c3c4251c | ami-096075e3334201678
      EU (Ireland) (eu-west-1) | ami-07d0c92a42077ec9b | ami-0fb8e730ee4b17f98
      EU (London) (eu-west-2) | ami-0ff8a4dc1632ee425 | ami-0c420fc6a2ab8a140
      EU (Paris) (eu-west-3) | ami-0569332dde21e3f1a | ami-009bd30954d1cdf61
      EU (Stockholm) (eu-north-1) | ami-0fc8c638bc80fcecf | ami-07fa78fe686748c79
      Kubernetes version 1.10.13
      Region | Amazon EKS-optimized AMI | Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) | ami-0295a10750423107d | ami-0a0d326e98757aa1b
      US East (N. Virginia) (us-east-1) | ami-05c9fba3332ccbc43 | ami-0e261247a4b523354
      US West (Oregon) (us-west-2) | ami-0fc349241eb7b1222 | ami-067089d967e068569
      Asia Pacific (Mumbai) (ap-south-1) | ami-0a183946b284a9841 | ami-014cc26f091950263
      Asia Pacific (Tokyo) (ap-northeast-1) | ami-0f93f5579e6e79e96 | ami-02fe5649049614901
      Asia Pacific (Seoul) (ap-northeast-2) | ami-0412ddfd70b9c54bd | ami-011a0f131a7148431
      Asia Pacific (Singapore) (ap-southeast-1) | ami-0538e8e564078659c | ami-0654c7681c0b39e0c
      Asia Pacific (Sydney) (ap-southeast-2) | ami-009caed75bdc3a2f0 | ami-0d120c3ce6fba36d8
      EU (Frankfurt) (eu-central-1) | ami-032fc49751b7a5f83 | ami-0be7b531dd58c5df1
      EU (Ireland) (eu-west-1) | ami-03f9c85cd73fb9f4a | ami-0b01f474bfc6c1260
      EU (London) (eu-west-2) | ami-05c9cec73d17bf97f | ami-0513d2fbf2aa77b8c
      EU (Paris) (eu-west-3) | ami-0df95e4cd302d42f7 | ami-0032d4bbdc242c41c
      EU (Stockholm) (eu-north-1) | ami-0ef218c64404e4bdf | ami-0b9102084fa8d4e01

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • KeyName – Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments – Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh

    • VpcId – Enter the ID for the VPC that your worker nodes should launch into.

    • Subnets – Choose the subnets within the preceding VPC that your worker nodes should launch into. If you are launching worker nodes into only private subnets, do not include public subnets here.

  8. On the Options page, you can choose to tag your stack resources. Choose Next.

  9. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Create.

  10. When your stack has finished creating, select it in the console and choose Outputs.

  11. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
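If you prefer the command line, the same output value can be read with the AWS CLI. This is a sketch: it assumes the CLI is installed and configured, and the stack name below is an example that you should replace with the name you chose earlier.

```shell
# Sketch: read the NodeInstanceRole stack output with the AWS CLI instead of
# the console. "my-cluster-worker-nodes" is an example stack name.
aws cloudformation describe-stacks \
    --stack-name my-cluster-worker-nodes \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
    --output text
```

The --query expression filters the stack's outputs to the one named NodeInstanceRole, and --output text prints the bare ARN.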

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Use the following command to download the configuration map:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a P2 or P3 instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a daemon set on your cluster with the following command.

    Note

    If your cluster is running a different Kubernetes version than 1.13, be sure to substitute your cluster's version in the following URL.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.13/nvidia-device-plugin.yml
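The placeholder substitution in the configuration-map step above can also be scripted. The following sketch recreates the file locally (its contents are reproduced from this topic) and fills in an example role ARN with sed; substitute your recorded NodeInstanceRole value for the example.

```shell
# Recreate the aws-auth configuration map as downloaded earlier in this topic.
cat > aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
# Example ARN only; use the NodeInstanceRole value you recorded.
ROLE_ARN="arn:aws:iam::111122223333:role/example-NodeInstanceRole"
# Use | as the sed delimiter because the ARN contains / characters.
sed -i "s|<ARN of instance role (not instance profile)>|${ROLE_ARN}|" aws-auth-cm.yaml
grep rolearn aws-auth-cm.yaml
```

After checking the file, apply it as described above with kubectl apply -f aws-auth-cm.yaml.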