
Updating an existing self-managed node group

This topic helps you update an existing AWS CloudFormation self-managed node stack with a new AMI. You can use this procedure to update your nodes to a new version of Kubernetes following a cluster update, or to update to the latest Amazon EKS optimized AMI for an existing Kubernetes version.

Important

This topic covers node updates for self-managed nodes. If you are using managed node groups, see Updating a managed node group.

The latest default Amazon EKS node AWS CloudFormation template is configured to launch an instance with the new AMI into your cluster before removing an old one, one at a time. This configuration ensures that you always have your Auto Scaling group's desired count of active instances in your cluster during the rolling update.

Note

This method is not supported for node groups that were created with eksctl. If you created your cluster or node group with eksctl, see Migrating to a new node group.

To update an existing node group

  1. Determine your cluster's DNS provider.

    kubectl get deployments -l k8s-app=kube-dns -n kube-system

    Output (this cluster is using kube-dns for DNS resolution, but your cluster may return coredns instead):

    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    <kube-dns>   1         1         1            1           31m
  2. If your current deployment is running fewer than two replicas, scale out the deployment to two replicas. Substitute coredns for kube-dns if your previous command output returned that instead.

    kubectl scale deployments/<kube-dns> --replicas=2 -n kube-system
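Steps 1 and 2 can also be combined into a short script that discovers the DNS deployment name instead of substituting it by hand. This is a sketch rather than part of the official procedure; it assumes the k8s-app=kube-dns label selector above matches exactly one deployment in your cluster:

```shell
# Discover the DNS deployment (kube-dns or coredns) and scale it to two replicas.
# Assumes the label selector matches exactly one deployment in kube-system.
DNS_DEPLOYMENT=$(kubectl get deployments -l k8s-app=kube-dns -n kube-system \
  -o jsonpath='{.items[0].metadata.name}')
kubectl scale deployments/"${DNS_DEPLOYMENT}" --replicas=2 -n kube-system
```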
  3. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to zero replicas to avoid conflicting scaling actions.

    kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system
  4. Determine the instance type and desired instance count of your current node group. You will enter these values later when you update the AWS CloudFormation template for the group.

    1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

    2. Choose Launch Configurations in the left navigation, and note the instance type for your existing node launch configuration.

    3. Choose Auto Scaling Groups in the left navigation and note the Desired instance count for your existing node Auto Scaling group.
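If you prefer the AWS CLI to the console, the same two values can be read with the commands below. This is an illustrative alternative to the console steps; the Auto Scaling group name is a placeholder you must replace:

```shell
# Placeholder: replace with the name of your node Auto Scaling group.
ASG_NAME="my-node-asg"

# Desired instance count of the node Auto Scaling group.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "${ASG_NAME}" \
  --query 'AutoScalingGroups[0].DesiredCapacity'

# Instance type of the group's launch configuration.
LC_NAME=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "${ASG_NAME}" \
  --query 'AutoScalingGroups[0].LaunchConfigurationName' --output text)
aws autoscaling describe-launch-configurations \
  --launch-configuration-names "${LC_NAME}" \
  --query 'LaunchConfigurations[0].InstanceType'
```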

  5. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  6. Select your node group stack, and then choose Update.

  7. Select Replace current template and select Amazon S3 URL.

  8. For Amazon S3 URL, paste the URL that corresponds to your cluster's Region into the text area to ensure that you are using the latest version of the node AWS CloudFormation template, and then choose Next:

    • All Regions other than China (Beijing) and China (Ningxia)

      https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-nodegroup.yaml
    • China (Beijing) and China (Ningxia)

      https://s3.cn-north-1.amazonaws.com.cn/amazon-eks/cloudformation/2020-10-29/amazon-eks-nodegroup.yaml
  9. On the Specify stack details page, fill out the following parameters, and choose Next:

    • NodeAutoScalingGroupDesiredCapacity – Enter the desired instance count that you recorded in a previous step, or enter a new desired number of nodes to scale to when your stack is updated.

    • NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes to which your node Auto Scaling group can scale out. This value must be at least one node greater than your desired capacity so that you can perform a rolling update of your nodes without reducing your node count during the update.

    • NodeInstanceType – Choose the instance type that you recorded in a previous step, or choose a different instance type for your nodes. Each Amazon EC2 instance type supports a maximum number of elastic network interfaces (ENIs), and each ENI supports a maximum number of IP addresses. Because each worker node and pod is assigned its own IP address, it's important to choose an instance type that supports the maximum number of pods that you want to run on each worker node. For a list of the number of ENIs and IP addresses supported by instance types, see IP addresses per network interface per instance type. For example, the m5.large instance type supports a maximum of 30 IP addresses for the worker node and pods.

      Note

      The supported instance types for the latest version of the Amazon VPC CNI plugin for Kubernetes are shown here. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI plugin for Kubernetes upgrades.

      Important

      Some instance types might not be available in all Regions.

    • NodeImageIdSSMParam – The Amazon EC2 Systems Manager parameter of the AMI ID that you want to update to. The following value uses the latest Amazon EKS optimized AMI for Kubernetes version 1.18.

      /aws/service/eks/optimized-ami/<1.18>/<amazon-linux-2>/recommended/image_id

      You can replace <1.18> (including <>) with a supported Kubernetes version that is the same as, or up to one version earlier than, the Kubernetes version running on your control plane. We recommend that you keep your nodes at the same version as your control plane. If you want to use the Amazon EKS optimized accelerated AMI, replace <amazon-linux-2> with <amazon-linux-2-gpu>.

      Note

      Using the Amazon EC2 Systems Manager parameter enables you to update your nodes in the future without having to look up and specify an AMI ID. If your AWS CloudFormation stack uses this value, any stack update launches the latest recommended Amazon EKS optimized AMI for your specified Kubernetes version, even if you don't change any values in the template.
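Before updating the stack, you can preview the AMI ID that this parameter currently resolves to. This CLI query is an optional aside, assuming the AWS CLI is configured for your cluster's Region:

```shell
# Resolve the recommended Amazon EKS optimized AMI for Kubernetes 1.18.
# Substitute your Kubernetes version and AMI variant as described above.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.18/amazon-linux-2/recommended/image_id \
  --query 'Parameter.Value' --output text
```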

    • NodeImageId – To use your own custom AMI, enter the ID for the AMI to use.

      Important

      This value overrides any value specified for NodeImageIdSSMParam. If you want to use the NodeImageIdSSMParam value, ensure that the value for NodeImageId is blank.

    • DisableIMDSv1 – Each node supports the Instance Metadata Service Version 1 (IMDSv1) and IMDSv2 by default, but you can disable IMDSv1. Select true if you don't want any nodes in the node group, or any pods scheduled on them, to use IMDSv1. For more information about IMDS, see Configuring the instance metadata service. If you've implemented IAM roles for service accounts, have assigned the necessary permissions directly to all pods that require access to AWS services, and no pods in your cluster require access to IMDS for other reasons (such as retrieving the current Region), then you can also disable access to IMDSv2 for pods that don't use host networking. For more information, see Restricting access to the IMDS and Amazon EC2 instance profile credentials.
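For reference, the console flow in steps 6 through 9 roughly corresponds to a single aws cloudformation update-stack call. This is a sketch with placeholder values: the stack name, the parameter values, and the assumption that ClusterName and NodeGroupName can be carried forward with UsePreviousValue are all illustrative, not prescriptive:

```shell
# Sketch of a CLI equivalent to the console update. The stack name and
# parameter values are placeholders; use the Region-appropriate template URL.
aws cloudformation update-stack \
  --stack-name my-node-stack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=3 \
    ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4 \
    ParameterKey=NodeInstanceType,ParameterValue=m5.large \
    ParameterKey=NodeImageIdSSMParam,ParameterValue=/aws/service/eks/optimized-ami/1.18/amazon-linux-2/recommended/image_id \
    ParameterKey=ClusterName,UsePreviousValue=true \
    ParameterKey=NodeGroupName,UsePreviousValue=true
```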

  10. (Optional) On the Options page, tag your stack resources. Choose Next.

  11. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Update stack.

    Note

    The update of each node in the cluster takes several minutes. Wait for the update of all nodes to complete before performing the next steps.
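While the stack update runs, you can watch the rolling replacement from kubectl. This is an optional check, assuming kubectl is configured for the cluster:

```shell
# Watch nodes join and leave during the rolling update (Ctrl+C to stop).
kubectl get nodes --watch

# When the update completes, confirm every node reports the expected
# kubelet version in the VERSION column.
kubectl get nodes
```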

  12. If your cluster's DNS provider is kube-dns, scale in the kube-dns deployment to one replica.

    kubectl scale deployments/kube-dns --replicas=1 -n kube-system
  13. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment back to your desired number of replicas.

    kubectl scale deployments/cluster-autoscaler --replicas=<1> -n kube-system
  14. (Optional) Verify that you are using the latest version of the Amazon VPC CNI plugin for Kubernetes. You may need to update your CNI version to take advantage of the latest supported instance types. For more information, see Amazon VPC CNI plugin for Kubernetes upgrades.
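One way to check the deployed CNI plugin version is to read the image tag of the aws-node daemonset, which runs the Amazon VPC CNI plugin:

```shell
# Print the image name and tag of the aws-node daemonset; the tag after the
# colon is the CNI plugin version (for example, amazon-k8s-cni:v1.7.5).
kubectl describe daemonset aws-node -n kube-system | grep Image | cut -d "/" -f 2
```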