Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on

Important

This topic will be removed from this guide on July 1, 2023. We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you're not familiar with the difference between the types, see Amazon EKS add-ons. For more information about adding an Amazon EKS add-on to your cluster, see Creating an add-on.

The Amazon VPC CNI plugin for Kubernetes add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. The add-on creates elastic network interfaces (network interfaces) and attaches them to your Amazon EC2 nodes. The add-on also assigns a private IPv4 or IPv6 address from your VPC to each pod and service. Your pods and services have the same IP address inside the pod as they do on the VPC network.
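
If you want to check this on a running cluster, one minimal option (assuming you have kubectl access) is to list pod IP addresses across namespaces and compare them with your VPC subnet CIDR ranges; pods on Amazon EC2 nodes should show addresses from your VPC.

    kubectl get pods --all-namespaces -o wide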

A version of the add-on is deployed with each Fargate node in your cluster, but you don't update it on Fargate nodes. For more information about the version of the add-on deployed to Amazon EC2 nodes, see amazon-vpc-cni-k8s and Proposal: CNI plugin for Kubernetes networking over Amazon VPC on GitHub. Several of the configuration variables for the plugin are expanded on in Choosing pod networking use cases. Other compatible CNI plugins are available for use on Amazon EKS clusters, but this is the only CNI plugin supported by Amazon EKS.

Prerequisites
Considerations
  • Versions are specified as major-version.minor-version.patch-version-eksbuild.build-number.

  • We recommend that you only update one minor version at a time. For example, if your current minor version is 1.10 and you want to update to 1.12, then you should update to 1.11 first, then update to 1.12 (a sketch for checking your current minor version follows this list).

  • All versions of the add-on work with all Kubernetes versions that Amazon EKS supports, though not all features of each release work with all Kubernetes versions. If an Amazon EKS feature requires a specific version of the add-on, the requirement is noted in the feature documentation. Unless you have a specific reason for running an earlier version, we recommend running the latest version.
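
If you're planning an incremental update, here's a minimal sketch (reusing the same command as step 2 of the procedure below) for capturing the version that's currently running so you can plan one-minor-version hops, for example 1.10.x to 1.11.x to 1.12.x.

    # Capture the running add-on version from the aws-node daemonset.
    current=$(kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3)
    echo "Currently running version: ${current}"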

To update the Amazon VPC CNI plugin for Kubernetes self-managed add-on
  1. Confirm that you have the self-managed type of the add-on installed on your cluster. Replace my-cluster with the name of your cluster.

    aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text

    If an error message is returned, you have the self-managed type of the add-on installed on your cluster. The remaining steps in this topic are for updating the self-managed type of the add-on. If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. To update it, use the procedure in Updating an add-on, rather than using the procedure in this topic. If you're not familiar with the differences between the add-on types, see Amazon EKS add-ons.

  2. See which version of the container image is currently installed on your cluster.

    kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
  3. Back up your current settings so you can configure the same settings once you've updated your version.

    kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
  4. To see the available versions and familiarize yourself with the changes in the version that you want to update to, see releases on GitHub.
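
    If you prefer to work from the command line, one optional way (a sketch, assuming curl and jq are installed) to list recent release tags is to query the public GitHub API for the aws/amazon-vpc-cni-k8s repository.

    # Print the five most recent amazon-vpc-cni-k8s release tags.
    curl -s https://api.github.com/repos/aws/amazon-vpc-cni-k8s/releases | jq -r '.[].tag_name' | head -5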

  5. If you don't have any custom settings, then run the command under the To apply this release: heading on GitHub for the release that you want to update to.

    If you have custom settings, download the manifest file with the following command instead of applying it. Replace url-of-manifest-from-github with the URL for the release on GitHub that you're updating to.

    curl -O url-of-manifest-from-github/aws-k8s-cni.yaml

    If necessary, modify the file with the custom settings from the backup you made and then apply the modified file to your cluster. If your nodes don't have access to the private Amazon EKS Amazon ECR repositories that the images are pulled from (see the lines that start with image: in the manifest), then you'll have to download the images, copy them to your own repository, and modify the manifest to pull the images from your repository. For more information, see Copy a container image from one repository to another repository.
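
    Before you apply the modified file, two optional checks might help (hedged sketches, not part of the official procedure): print the environment variables that are currently set on the aws-node container so you can carry any non-default values over to the downloaded manifest, and perform a server-side dry run of the apply (the --dry-run=server flag requires kubectl 1.18 or later).

    # List name=value pairs for the aws-node container's environment variables.
    kubectl get daemonset aws-node -n kube-system -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}={.value}{"\n"}{end}'

    # Validate the modified manifest against the API server without changing anything.
    kubectl apply -f aws-k8s-cni.yaml --dry-run=server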

    kubectl apply -f aws-k8s-cni.yaml
  6. Confirm that the new version is now installed on your cluster.

    kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
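
    Optionally, you can also confirm that the updated daemonset finished rolling out (an extra check, not part of the original procedure).

    kubectl rollout status daemonset aws-node -n kube-system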