Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on
This topic will be removed from this guide on July 1, 2023. We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you're not familiar with the difference between the types, see Amazon EKS add-ons. For more information about adding an Amazon EKS add-on to your cluster, see Creating an add-on.
The Amazon VPC CNI plugin for Kubernetes add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. The add-on creates elastic network interfaces (network interfaces) and attaches them to your Amazon EC2 nodes. The add-on also assigns a private IPv4 or IPv6 address from your VPC to each pod and service. Your pods and services have the same IP address inside the pod as they do on the VPC network.

A version of the add-on is deployed with each Fargate node in your cluster, but you don't update it on Fargate nodes. For more information about the version of the add-on deployed to Amazon EC2 nodes, see amazon-vpc-cni-k8s on GitHub.
Prerequisites
- An existing Amazon EKS cluster. To deploy one, see Getting started with Amazon EKS.
- If your cluster is 1.21 or later, make sure that your kube-proxy and CoreDNS add-ons are at the minimum versions listed in Service account tokens.
- An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see Creating an IAM OIDC provider for your cluster.
- An IAM role with the AmazonEKS_CNI_Policy IAM policy (if your cluster uses the IPv4 family) or an IPv6 policy (if your cluster uses the IPv6 family) attached to it. For more information, see Configuring the Amazon VPC CNI plugin for Kubernetes to use IAM roles for service accounts.
- If you're using version 1.7.0 or later of the CNI plugin and you use custom pod security policies, see Delete the default Amazon EKS pod security policy.
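The OIDC prerequisite can be checked from the command line. The following is a sketch, not an official procedure: `oidc_id_from_issuer` is a hypothetical helper for pulling the provider ID out of the issuer URL, and the cluster name `my-cluster` and the issuer value are placeholders.

```shell
# Hypothetical helper: extract the OIDC provider ID from an issuer URL of
# the form https://oidc.eks.<region>.amazonaws.com/id/<ID>.
oidc_id_from_issuer() {
  echo "$1" | awk -F'/id/' '{print $2}'
}

# Fetch your cluster's real issuer (my-cluster is a placeholder):
#   issuer=$(aws eks describe-cluster --name my-cluster \
#     --query "cluster.identity.oidc.issuer" --output text)
issuer="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

oidc_id_from_issuer "$issuer"

# Then confirm that the provider is registered in IAM:
#   aws iam list-open-id-connect-providers | grep "$(oidc_id_from_issuer "$issuer")"
```

If the `grep` against `list-open-id-connect-providers` returns no match, create the provider first; see Creating an IAM OIDC provider for your cluster.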
Considerations
- Versions are specified as major-version.minor-version.patch-version-eksbuild.build-number.
- We recommend that you update only one minor version at a time. For example, if your current minor version is 1.10 and you want to update to 1.12, update to 1.11 first, and then update to 1.12.
- All versions work with all Amazon EKS supported Kubernetes versions, though not all features of each release work with all Kubernetes versions. When using different Amazon EKS features, if a specific version of the add-on is required, then it's noted in the feature documentation. Unless you have a specific reason for running an earlier version, we recommend running the latest version.
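The one-minor-version-at-a-time rule can be sketched as a small helper. `minor_steps` is a hypothetical illustration, not part of any AWS tooling: given a current and a target `major.minor` version, it lists each intermediate update step.

```shell
# Hypothetical helper: list the minor-version steps between a current and a
# target add-on version, one minor version at a time.
minor_steps() {
  major="${1%%.*}"   # e.g. 1 from 1.10
  cur="${1#*.}"      # e.g. 10 from 1.10
  target="${2#*.}"   # e.g. 12 from 1.12
  for m in $(seq "$((cur + 1))" "$target"); do
    echo "${major}.${m}"
  done
}

minor_steps 1.10 1.12
# prints:
# 1.11
# 1.12
```

So an update from 1.10 to 1.12 is two updates: first to 1.11, then to 1.12.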
To update the Amazon VPC CNI plugin for Kubernetes self-managed add-on
- Confirm that you have the self-managed type of the add-on installed on your cluster. Replace my-cluster with the name of your cluster.

  ```
  aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
  ```

  If an error message is returned, you have the self-managed type of the add-on installed on your cluster. The remaining steps in this topic are for updating the self-managed type of the add-on. If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. To update it, use the procedure in Updating an add-on, rather than the procedure in this topic. If you're not familiar with the differences between the add-on types, see Amazon EKS add-ons.
- See which version of the container image is currently installed on your cluster.

  ```
  kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
  ```
- Back up your current settings so that you can reapply them after you update your version.

  ```
  kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
  ```
- To see the available versions and to familiarize yourself with the changes in the version that you want to update to, see releases on GitHub.
If you don't have any custom settings, then run the command under
To apply this release:
heading on GitHub for the releasethat you want to update to. If you have custom settings, download the manifest file with the following command, instead of applying it. Change
url-of-manifest-from-github
to the URL for the release on GitHub that you're installing.curl -O
url-of-manifest-from-github
/aws-k8s-cni.yamlIf necessary, modify the file with the custom settings from the backup you made and then apply the modified file to your cluster. If your nodes don't have access to the private Amazon EKS Amazon ECR repositories that the images are pulled from (see the lines that start with
image:
in the manifest), then you'll have to download the images, copy them to your own repository, and modify the manifest to pull the images from your repository. For more information, see Copy a container image from one repository to another repository.kubectl apply -f aws-k8s-cni.yaml
- Confirm that the new version is now installed on your cluster.

  ```
  kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
  ```