Creating a managed node group

This topic describes how you can launch an Amazon EKS managed node group of nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching an Amazon EKS managed node group, we recommend that you follow one of our Getting started with Amazon EKS guides instead. The guides provide walkthroughs for creating an Amazon EKS cluster with nodes.

Prerequisites

  • An existing cluster. If you don't have an existing cluster, follow one of the Getting started with Amazon EKS guides to create your cluster and node group.

  • (Optional) If you want to significantly increase the number of pods that you can run per instance, then you must have version 1.9.0 or later of the Amazon VPC CNI add-on installed and configured appropriately, and your node group must use AWS Nitro System instances. For more information, see Increase the amount of available IP addresses for your Amazon EC2 nodes. A command for checking your installed add-on version is shown after this list.

  • (Optional) If you want to assign IP addresses to pods from a different subnet than the instance's, then you must complete the procedure in CNI custom networking before deploying your node group.
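
One common way to check which version of the Amazon VPC CNI add-on is running on your cluster is to read the image tag of the aws-node DaemonSet, as in the following command. This is a sketch of the check rather than an authoritative procedure; the exact output format can vary depending on how the add-on was installed.

kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3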

You can create a managed node group with eksctl or the AWS Management Console.

eksctl

To create a managed node group with eksctl

This procedure requires eksctl version 0.74.0 or later. You can check your version with the following command:

eksctl version

For more information on installing or upgrading eksctl, see Installing or upgrading eksctl.

  1. (Optional) If the AmazonEKS_CNI_Policy managed IAM policy is attached to your Amazon EKS node IAM role, we recommend assigning it to an IAM role that you associate to the Kubernetes aws-node service account instead. For more information, see Configuring the Amazon VPC CNI plugin to use IAM roles for service accounts.
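
    For example, a command along the following lines associates an IAM role with the aws-node service account and attaches the AmazonEKS_CNI_Policy to it. Treat this as a starting point only; the cluster name is a placeholder, and the full procedure is in the linked topic.

    eksctl create iamserviceaccount \
      --name aws-node \
      --namespace kube-system \
      --cluster <my-cluster> \
      --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
      --approve \
      --override-existing-serviceaccounts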

  2. Create your managed node group with or without using a custom launch template. Manually specifying a launch template allows for greater customization of a node group. For example, it can allow deploying a custom AMI or providing arguments to the bootstrap.sh script in an Amazon EKS optimized AMI. For a complete list of all available options and defaults, enter the following command.

    eksctl create nodegroup --help

    Replace the <example values> (including the <>) with your own values.

    Important

    If you don't use a custom launch template when first creating a managed node group, don't use one at a later time for the node group. If you didn't specify a custom launch template, the system auto-generates a launch template that we don't recommend modifying manually, because doing so might cause errors.

    • Without a launch template – eksctl creates a default Amazon EC2 launch template in your account and deploys the node group using a launch template that it creates based on options that you specify. Before specifying a value for --node-type, see Choosing an Amazon EC2 instance type.

      Replace <my-key> with the name of your Amazon EC2 key pair or public key. This key is used to SSH into your nodes after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 key pairs in the Amazon EC2 User Guide for Linux Instances.
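
      If you need to create a key pair first, the following AWS CLI command is one way to do it (this assumes the AWS CLI is installed and configured; you can also create the key pair in the console as described in the linked topic).

      aws ec2 create-key-pair --key-name <my-key> --query 'KeyMaterial' --output text > <my-key>.pem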

      If you plan to assign IAM roles to all of your Kubernetes service accounts so that pods only have the minimum permissions that they need, and no pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current Region, then we recommend blocking pod access to IMDS. For more information, see Restrict access to the instance profile assigned to the worker node. If you want to block pod access to IMDS, then add the --disable-pod-imds option to the following command.

      eksctl create nodegroup \
        --cluster <my-cluster> \
        --region <region-code> \
        --name <my-mng> \
        --node-type <m5.large> \
        --nodes <3> \
        --nodes-min <2> \
        --nodes-max <4> \
        --ssh-access \
        --ssh-public-key <my-key>

      Your instances can optionally assign a significantly higher number of IP addresses to pods, assign IP addresses to pods from a different CIDR block than the instance's, and be deployed to a cluster without internet access. For more information, see Increase the amount of available IP addresses for your Amazon EC2 nodes, CNI custom networking, and Private clusters for additional options to add to the previous command.

      Managed node groups calculate and apply a single value for the maximum number of pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all of the instance types is applied as the maximum number of pods that can run on every instance type in the node group. Managed node groups calculate this value using the script referenced in Amazon EKS recommended maximum pods for each Amazon EC2 instance type.
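
      To see the value that applies to a particular instance type, you can run the max pods calculator script referenced in that topic. The path and options below reflect the script as published in the amazon-eks-ami repository at the time of writing; verify them against the linked topic before relying on the output.

      curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh
      chmod +x max-pods-calculator.sh
      ./max-pods-calculator.sh --instance-type <m5.large> --cni-version <1.9.0-eksbuild.1>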

    • With a launch template – The launch template must already exist and must meet the requirements specified in Launch template configuration basics. If you plan to assign IAM roles to all of your Kubernetes service accounts so that pods only have the minimum permissions that they need, and no pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current Region, then we recommend blocking pod access to IMDS. For more information, see Restrict access to the instance profile assigned to the worker node. If you want to block pod access to IMDS, then specify the necessary settings in the launch template. An example of these settings is shown after the following steps.

      1. Create a file named eks-nodegroup.yaml with the following contents. Several settings that you specify when deploying without a launch template are moved into the launch template. If you don't specify a version, the template's default version is used.

        apiVersion: eksctl.io/v1alpha5
        kind: ClusterConfig
        metadata:
          name: <my-cluster>
          region: <region-code>
        managedNodeGroups:
        - name: <node-group-lt>
          launchTemplate:
            id: lt-<id>
            version: "<1>"

        For a complete list of eksctl config file settings, see Config file schema in the eksctl documentation. Your instances can optionally assign a significantly higher number of IP addresses to pods, assign IP addresses to pods from a different CIDR block than the instance's, use the containerd runtime, and be deployed to a cluster without outbound internet access. For more information, see Increase the amount of available IP addresses for your Amazon EC2 nodes, CNI custom networking, Enable the containerd runtime bootstrap flag, and Private clusters for additional options to add to the config file.

        If you didn't specify an AMI ID in your launch template, managed node groups calculate and apply a single value for the maximum number of pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all of the instance types is applied as the maximum number of pods that can run on every instance type in the node group. Managed node groups calculate this value using the script referenced in Amazon EKS recommended maximum pods for each Amazon EC2 instance type.

        If you specified an AMI ID in your launch template, specify the maximum number of pods that can run on each node of your node group if you're using custom networking or want to increase the number of IP addresses assigned to your instance. For more information, see Amazon EKS recommended maximum pods for each Amazon EC2 instance type.

      2. Deploy the nodegroup with the following command.

        eksctl create nodegroup --config-file eks-nodegroup.yaml
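
      As noted above, blocking pod access to IMDS with a launch template is done through the template's instance metadata settings. The following AWS CLI command is a minimal sketch that sets HttpTokens to required and HttpPutResponseHopLimit to 1, which prevents pods that don't use host networking from reaching IMDS. The template name is a placeholder, and a launch template that you use for a node group also needs the other settings described in Launch template configuration basics.

      aws ec2 create-launch-template \
        --launch-template-name <my-template> \
        --launch-template-data '{"MetadataOptions":{"HttpTokens":"required","HttpPutResponseHopLimit":1}}'
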
AWS Management Console

To create your managed node group using the AWS Management Console

  1. Wait for your cluster status to show as ACTIVE. You can't create a managed node group for a cluster that isn't already ACTIVE.

  2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

  3. Choose the name of the cluster that you want to create your managed node group in.

  4. Select the Configuration tab.

  5. On the Configuration tab, select the Compute tab, and then choose Add Node Group.

  6. On the Configure node group page, fill out the parameters accordingly, and then choose Next.

    • Name – Enter a unique name for your managed node group.

    • Node IAM role name – Choose the node instance role to use with your node group. For more information, see Amazon EKS node IAM role.

      Important

      We recommend using a role that's not currently in use by any self-managed node group, or that you plan to use with a new self-managed node group. For more information, see Deleting a managed node group.

    • Use launch template – (Optional) Choose if you want to use an existing launch template. Then, select a Launch template version (Optional). If you don't select a version, then Amazon EKS uses the template's default version. Launch templates allow for more customization of your node group, such as deploying a custom AMI, assigning a significantly higher number of IP addresses to pods, assigning IP addresses to pods from a different CIDR block than the instance's, enabling the containerd runtime for your instances, and deploying nodes to a cluster without outbound internet access. For more information, see Increase the amount of available IP addresses for your Amazon EC2 nodes, CNI custom networking, Enable the containerd runtime bootstrap flag, and Private clusters.

      The launch template must meet the requirements in Launch template support. If you don't use your own launch template, the Amazon EKS API creates a default Amazon EC2 launch template in your account and deploys the node group using the default launch template.

      If you implement IAM roles for service accounts, assign necessary permissions directly to all pods that require access to AWS services, and no pods in your cluster require access to IMDS for other reasons, such as retrieving the current Region, then you can also disable access to IMDS for pods that don't use host networking in a launch template. For more information, see Restrict access to the instance profile assigned to the worker node.

    • Kubernetes labels – (Optional) You can choose to apply Kubernetes labels to the nodes in your managed node group.

    • Kubernetes taints – (Optional) You can choose to apply Kubernetes taints with the effect of either No_Schedule, Prefer_No_Schedule, or No_Execute to the nodes in your managed node group.

    • Tags – (Optional) You can choose to tag your Amazon EKS managed node group. These tags don't propagate to other resources in the node group, such as Auto Scaling groups or instances. For more information, see Tagging your Amazon EKS resources.

    • Node group update configuration – (Optional) You can select the number or percentage of nodes to be updated in parallel. Select either Number or Percentage to enter a value. These nodes won't be available during the update.

  7. On the Set compute and scaling configuration page, fill out the parameters accordingly, and then choose Next.

    • AMI type – Choose Amazon Linux 2 (AL2_x86_64) for Linux non-GPU instances, Amazon Linux 2 GPU Enabled (AL2_x86_64_GPU) for Linux GPU instances, Amazon Linux 2 (AL2_ARM_64) for Linux Arm instances, Bottlerocket (ARM_64) for Bottlerocket Arm instances, or Bottlerocket (x86_64) for Bottlerocket x86_64 instances.

      If you are deploying Arm instances, be sure to review the considerations in Amazon EKS optimized Arm Amazon Linux AMIs before deploying.

      If you specified a launch template on the previous page, and specified an AMI in the launch template, then you can't select a value. The value from the template is displayed. The AMI specified in the template must meet the requirements in Specifying an AMI.

    • Capacity type – Select a capacity type. For more information about choosing a capacity type, see Managed node group capacity types. You can't mix different capacity types within the same node group. If you want to use both capacity types, create separate node groups, each with their own capacity and instance types.

    • Instance type – By default, one or more instance types are specified. To remove a default instance type, select the X on the right side of the instance type. Choose the instance types to use in your managed node group. Before choosing an instance type, review Choosing an Amazon EC2 instance type.

      The console displays a set of commonly used instance types. For the complete set of supported instance types, see the list in eni-max-pods.txt on GitHub. If you need to create a managed node group with an instance type that's not displayed, then use eksctl, the AWS CLI, AWS CloudFormation, or an SDK to create the node group (an example AWS CLI command is shown at the end of this step). If you specified a launch template on the previous page, then you can't select a value because the instance type must be specified in the launch template. The value from the launch template is displayed. If you selected Spot for Capacity type, then we recommend specifying multiple instance types to enhance availability.

    • Disk size – Enter the disk size (in GiB) to use for your node's root volume.

      If you specified a launch template on the previous page, then you can't select a value because it must be specified in the launch template.

    • Minimum size – Specify the minimum number of nodes that the managed node group can scale in to.

    • Maximum size – Specify the maximum number of nodes that the managed node group can scale out to.

    • Desired size – Specify the current number of nodes that the managed node group should maintain at launch.

      Note

      Amazon EKS doesn't automatically scale your node group in or out. However, you can configure the Kubernetes Cluster Autoscaler to do this for you.

    • For Maximum unavailable, select one of the following options and specify a Value:

      • Number – Select and specify the number of nodes in your node group that can be updated in parallel. These nodes will be unavailable during update.

      • Percentage – Select and specify the percentage of nodes in your node group that can be updated in parallel. These nodes will be unavailable during update. This is useful if you have a large number of nodes in your node group.
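
    As mentioned for Instance type above, you can also create a managed node group outside the console. The following AWS CLI command is a sketch of that approach; the cluster name, subnet IDs, role ARN, and sizing values are placeholders, and options such as --ami-type, --disk-size, and --launch-template can be added as needed.

    aws eks create-nodegroup \
      --cluster-name <my-cluster> \
      --nodegroup-name <my-mng> \
      --subnets <subnet-0a1b2c3d4e5f6a7b8> <subnet-1a2b3c4d5e6f7a8b9> \
      --node-role <arn:aws:iam::111122223333:role/AmazonEKSNodeRole> \
      --instance-types <m5.large> \
      --scaling-config minSize=<2>,maxSize=<4>,desiredSize=<3>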

  8. On the Specify networking page, fill out the parameters accordingly, and then choose Next.

    • Subnets – Choose the subnets to launch your managed nodes into.

      Important

      If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes Cluster Autoscaler, you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature. An illustrative excerpt showing where this flag is set appears at the end of this step.

      Important
      • If you choose a public subnet, and your cluster has only the public API server endpoint enabled, then the subnet must have MapPublicIpOnLaunch set to true for the instances to successfully join a cluster. If the subnet was created using eksctl or the Amazon EKS vended AWS CloudFormation templates on or after March 26, 2020, then this setting is already set to true. If the subnets were created with eksctl or the AWS CloudFormation templates before March 26, 2020, then you need to change the setting manually. For more information, see Modifying the public IPv4 addressing attribute for your subnet.

      • If you use a launch template and specify multiple network interfaces, Amazon EC2 will not auto-assign a public IPv4 address, even if MapPublicIpOnLaunch is set to true. For nodes to join the cluster in this scenario, you must either enable the cluster's private API server endpoint, or launch nodes in a private subnet with outbound internet access provided through an alternative method, such as a NAT Gateway. For more information, see Amazon EC2 instance IP addressing in the Amazon EC2 User Guide for Linux Instances.

    • Configure SSH access to nodes (Optional). Enabling SSH allows you to connect to your instances and gather diagnostic information if there are issues. Complete the following steps to enable remote access. We highly recommend enabling remote access when you create your node group. You cannot enable remote access after the node group is created.

      If you chose to use a launch template, then this option isn't shown. To enable remote access to your nodes, specify a key pair in the launch template and ensure that the proper port is open to the nodes in the security groups that you specify in the launch template. For more information, see Using custom security groups.

    • For SSH key pair (Optional), choose an Amazon EC2 SSH key to use. For more information, see Amazon EC2 key pairs in the Amazon EC2 User Guide for Linux Instances. If you chose to use a launch template, then you can't select one. When an Amazon EC2 SSH key is provided for node groups using Bottlerocket AMIs, the administrative container is also enabled. For more information, see Admin container on GitHub.

    • For Allow SSH remote access from, if you want to limit access to specific instances, then select the security groups that are associated to those instances. If you don't select specific security groups, then SSH access is allowed from anywhere on the internet (0.0.0.0/0).
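
    As noted in the subnet guidance above, --balance-similar-node-groups is a command-line flag on the Cluster Autoscaler itself. The fragment below is an illustrative excerpt of the container command in a typical Cluster Autoscaler Deployment manifest, not a complete manifest; the cluster name in the auto-discovery tag is a placeholder, and the full setup is covered in the Cluster Autoscaler topic.

    spec:
      containers:
      - name: cluster-autoscaler
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<my-cluster>
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false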

  9. On the Review and create page, review your managed node group configuration and choose Create.

    If nodes fail to join the cluster, then see Nodes fail to join cluster in the Troubleshooting guide.

  10. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  11. (GPU nodes only) If you chose a GPU instance type and the Amazon EKS optimized accelerated AMI, then you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml
  12. (Optional) After you add Linux worker nodes to your cluster, follow the procedures in Windows support to add Windows support to your cluster and to add Windows worker nodes. All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to run Windows workloads in your cluster.

Now that you have a working Amazon EKS cluster with nodes, you're ready to start installing Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help you to extend the functionality of your cluster.

  • The IAM entity (user or role) that created the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. If you want other users to have access to your cluster, then you must add them to the aws-auth ConfigMap (an illustrative example follows this list). For more information, see Managing users or IAM roles for your cluster.

  • Restrict access to the instance metadata service – If you plan to assign IAM roles to all of your Kubernetes service accounts so that pods only have the minimum permissions that they need, and no pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current Region, then we recommend blocking pod access to IMDS. For more information, see Restrict access to the instance profile assigned to the worker node.

  • Cluster Autoscaler – Configure the Kubernetes Cluster Autoscaler to automatically adjust the number of nodes in your node groups.

  • Deploy a sample application – Deploy a sample Linux application to test your cluster and Linux nodes.

  • Cluster management – Learn how to use important tools for managing your cluster.
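
For reference, adding a user to the aws-auth ConfigMap (mentioned in the first item above) is typically done by editing the ConfigMap in the kube-system namespace, for example with kubectl edit -n kube-system configmap/aws-auth. The entries below are an illustrative sketch only; the account ID, user name, and role ARN are placeholders, and the authoritative format is described in Managing users or IAM roles for your cluster.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of the node IAM role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<111122223333>:user/<admin-user>
      username: <admin-user>
      groups:
        - system:masters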