Amazon EKS
User Guide

Launching Amazon EKS Linux Worker Nodes

This topic helps you to launch an Auto Scaling group of Linux worker nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching Amazon EKS Linux worker nodes, we recommend that you follow one of our Getting Started with Amazon EKS guides instead. The guides provide complete end-to-end walkthroughs for creating an Amazon EKS cluster with worker nodes.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 prices. For more information, see Amazon EC2 Pricing.

Choose the tab below that corresponds to your desired worker node creation method:

Amazon EKS Managed Node Groups | eksctl | Unmanaged nodes
Amazon EKS Managed Node Groups

Managed Node Groups are supported on Amazon EKS clusters beginning with Kubernetes version 1.14 and platform version eks.3. Existing clusters can update to version 1.14 to take advantage of this feature. For more information, see Updating an Amazon EKS Cluster Kubernetes Version. Existing 1.14 clusters will be automatically updated to eks.3 over time to support this feature.

To launch your managed node group

  1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a cluster that is not yet ACTIVE.

  2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

  3. Choose the name of the cluster that you want to create your managed node group in.

  4. On the cluster page, choose Add node group.

  5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.

    • Name — Enter a unique name for your managed node group.

    • Node IAM role name — Choose the node instance role to use with your node group. For more information, see Amazon EKS Worker Node IAM Role.

    • Subnets — Choose the subnets to launch your managed nodes into.

      Important

      If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes Cluster Autoscaler, you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature.

    • Remote Access — (Optional) You can enable SSH access to the nodes in your managed node group. This allows you to connect to your instances and gather diagnostic information if there are issues. Complete the following steps to enable remote access.

      Note

      We highly recommend enabling remote access when you create your node group. You cannot enable remote access after the node group is created.

      1. Select the check box to Allow remote access to nodes.

      2. For SSH key pair, choose an Amazon EC2 SSH key to use. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      3. For Allow remote access from, choose All to allow SSH access from anywhere on the Internet (0.0.0.0/0), or select a security group to allow SSH access from instances that belong to that security group.

    • Tags — (Optional) You can choose to tag your Amazon EKS managed node group. These tags do not propagate to other resources in the node group, such as Auto Scaling groups or instances. For more information, see Tagging Your Amazon EKS Resources.

    • Kubernetes labels — (Optional) You can choose to apply Kubernetes labels to the nodes in your managed node group (see the example following this list).
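
    The following sketch shows one way these labels can be used once the node group is active. The label key and value (workload-type: batch) are hypothetical; any label that you enter on this page is applied to every node in the group and can then be targeted from a pod spec.

    # List only the nodes that carry the hypothetical label.
    kubectl get nodes -l workload-type=batch

    # Pod spec excerpt (not a complete manifest) that schedules onto those nodes.
    spec:
      nodeSelector:
        workload-type: batch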

  6. On the Set compute configuration page, fill out the parameters accordingly, and then choose Next.

    • AMI type — Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux 2 GPU Enabled (AL2_x86_64_GPU) for GPU instances.

    • Instance type — Choose the instance type to use in your managed node group. Larger instance types can accommodate more pods.

    • Disk size — Enter the disk size (in GiB) to use for your worker node root volume.

  7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.

    Note

    Amazon EKS does not automatically scale your node group in or out. However, you can configure the Kubernetes Cluster Autoscaler to do this for you (a brief sketch of the relevant flags follows this list).

    • Minimum size — Specify the minimum number of worker nodes that the managed node group can scale in to.

    • Maximum size — Specify the maximum number of worker nodes that the managed node group can scale out to.

    • Desired size — Specify the current number of worker nodes that the managed node group should maintain at launch.
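
    If you do configure the Kubernetes Cluster Autoscaler, the excerpt below sketches the container flags that are typically relevant to node groups, including the --balance-similar-node-groups option mentioned earlier. The image tag and the cluster name (my-cluster) are assumptions, and this is not a complete manifest.

    # Excerpt from a cluster-autoscaler Deployment spec
    containers:
    - name: cluster-autoscaler
      image: k8s.gcr.io/cluster-autoscaler:v1.14.8
      command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster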

  8. On the Review and create page, review your managed node group configuration and choose Create.

  9. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  10. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
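
    After the DaemonSet is running, you can confirm that your GPU nodes advertise GPUs as an allocatable resource. The nvidia.com/gpu resource name is what the NVIDIA device plugin registers; nodes without GPUs show the column as <none>.

    kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"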
eksctl

To launch worker nodes with eksctl

This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.11.0. You can check your version with the following command:

eksctl version

For more information on installing or upgrading eksctl, see Installing or Upgrading eksctl.

Note

This procedure only works for clusters that were created with eksctl.

  1. Create your worker node group with the following command. Replace the example values with your own values.

    eksctl create nodegroup \
      --cluster default \
      --version auto \
      --name standard-workers \
      --node-type t3.medium \
      --node-ami auto \
      --nodes 3 \
      --nodes-min 1 \
      --nodes-max 4

    Note

    For more information on the available options for eksctl create nodegroup, see the project README on GitHub or view the help page with the following command.

    eksctl create nodegroup --help

    Output:

    [ℹ]  using region us-west-2
    [ℹ]  will use version 1.12 for new nodegroup(s) based on control plane version
    [ℹ]  nodegroup "standard-workers" will use "ami-0923e4b35a30a5f53" [AmazonLinux2/1.12]
    [ℹ]  1 nodegroup (standard-workers) was included
    [ℹ]  will create a CloudFormation stack for each of 1 nodegroups in cluster "default"
    [ℹ]  1 task: { create nodegroup "standard-workers" }
    [ℹ]  building nodegroup stack "eksctl-default-nodegroup-standard-workers"
    [ℹ]  deploying stack "eksctl-default-nodegroup-standard-workers"
    [ℹ]  adding role "arn:aws:iam::111122223333:role/eksctl-default-nodegroup-standard-NodeInstanceRole-12C2JO814XSEE" to auth ConfigMap
    [ℹ]  nodegroup "standard-workers" has 0 node(s)
    [ℹ]  waiting for at least 1 node(s) to become ready in "standard-workers"
    [ℹ]  nodegroup "standard-workers" has 3 node(s)
    [ℹ]  node "ip-192-168-52-42.us-west-2.compute.internal" is ready
    [ℹ]  node "ip-192-168-7-27.us-west-2.compute.internal" is not ready
    [ℹ]  node "ip-192-168-76-138.us-west-2.compute.internal" is not ready
    [✔]  created 1 nodegroup(s) in cluster "default"
    [ℹ]  checking security group configuration for all nodegroups
    [ℹ]  all nodegroups have up-to-date configuration
  2. (Optional) Launch a Guest Book Application — Deploy a sample application to test your cluster and Linux worker nodes.

Unmanaged nodes

These procedures have the following prerequisites:

  • You have created an Amazon EKS cluster. For more information, see Creating an Amazon EKS Cluster.

  • You have created a VPC and security group that meet the requirements for your Amazon EKS cluster. For more information, see Create your Amazon EKS Cluster VPC.

To launch your unmanaged worker nodes with the AWS Management Console

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.

  2. Choose the tab below that corresponds to your cluster's Kubernetes version, then choose a Launch workers link that corresponds to your region and AMI type. This opens the AWS CloudFormation console and pre-populates several fields for you.

    Kubernetes version 1.14.7 | Kubernetes version 1.13.11 | Kubernetes version 1.12.10

    For each Kubernetes version, the table provides two Launch workers links per Region: one for the standard Amazon EKS-optimized AMI and one for the Amazon EKS-optimized AMI with GPU support. Launch links are available for the following Regions: US East (Ohio) (us-east-2), US East (N. Virginia) (us-east-1), US West (Oregon) (us-west-2), Asia Pacific (Hong Kong) (ap-east-1), Asia Pacific (Mumbai) (ap-south-1), Asia Pacific (Tokyo) (ap-northeast-1), Asia Pacific (Seoul) (ap-northeast-2), Asia Pacific (Singapore) (ap-southeast-1), Asia Pacific (Sydney) (ap-southeast-2), Canada (Central) (ca-central-1), EU (Frankfurt) (eu-central-1), EU (Ireland) (eu-west-1), EU (London) (eu-west-2), EU (Paris) (eu-west-3), EU (Stockholm) (eu-north-1), Middle East (Bahrain) (me-south-1), and South America (São Paulo) (sa-east-1).

    Note

    If you intend to deploy worker nodes only to private subnets, you should edit this template in the AWS CloudFormation designer and set the AssociatePublicIpAddress parameter in the NodeLaunchConfig to false.

    AssociatePublicIpAddress: 'false'
  3. On the Quick create stack page, fill out the following parameters accordingly:

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName: Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match the name you used in Step 1: Create Your Amazon EKS Cluster; otherwise, your worker nodes cannot join the cluster.

    • ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC.

    • NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node Auto Scaling group can scale in to.

    • NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker node Auto Scaling group can scale out to.

    • NodeInstanceType: Choose an instance type for your worker nodes.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are shown here. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades.

      Important

      Some instance types might not be available in all regions.

    • NodeImageIdSSMParam: Pre-populated based on the version that you launched your worker nodes with in step 2. This value is the Amazon EC2 Systems Manager Parameter Store parameter to use for your worker node AMI ID. For example, the /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id parameter is for the latest recommended Kubernetes version 1.14 Amazon EKS-optimized AMI.

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • NodeImageId: (Optional) If you are using your own custom AMI (instead of the Amazon EKS-optimized AMI), enter a worker node AMI ID for your Region. If you specify a value here, it overrides any values in the NodeImageIdSSMParam field.

    • NodeVolumeSize: Specify a root volume size for your worker nodes, in GiB.

    • KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh, or see the example following this parameter list.

    • VpcId: Enter the ID for the VPC that you created in Create your Amazon EKS Cluster VPC.

    • Subnets: Choose the subnets that you created in Create your Amazon EKS Cluster VPC. If you created your VPC using the steps described at Creating a VPC for Your Amazon EKS Cluster, then specify only the private subnets within the VPC for your worker nodes to launch into.
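
    Two of the parameters above lend themselves to quick command-line checks. First, to preview which AMI ID the NodeImageIdSSMParam value resolves to, you can query the Systems Manager Parameter Store yourself, substituting your own Region for us-west-2:

    aws ssm get-parameter \
        --name /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id \
        --region us-west-2 \
        --query "Parameter.Value" \
        --output text

    Second, a common BootstrapArguments value passes extra kubelet flags through the bootstrap script's --kubelet-extra-args option. The node label shown here is hypothetical; see the bootstrap.sh usage information linked above for the full set of supported options.

    --kubelet-extra-args '--node-labels=workload-type=batch'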

  4. Acknowledge that the stack might create IAM resources, and then choose Create stack.

  5. When your stack has finished creating, select it in the console and choose Outputs.

  6. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
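
    If you prefer the command line, the same output value can be read with the AWS CLI. The stack name below is a placeholder for the name that you chose in step 3, and the output key is assumed to match the NodeInstanceRole key shown in the console.

    aws cloudformation describe-stacks \
        --stack-name <cluster-name>-worker-nodes \
        --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
        --output text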

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Use the following command to download the configuration map:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
  4. (Optional) Launch a Guest Book Application — Deploy a sample application to test your cluster and Linux worker nodes.