Amazon EKS optimized Amazon Linux AMIs
The Amazon EKS optimized Amazon Linux AMI is built on top of Amazon Linux 2, and is configured to serve as the base image for Amazon EKS nodes. The AMI is configured to work with Amazon EKS and it includes the following components:
- kubelet
- AWS IAM Authenticator
- Docker (Amazon EKS version 1.23 and earlier)
- containerd

Note

- You can track security or privacy events for Amazon Linux 2 at the Amazon Linux security center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.
- Before deploying an accelerated or Arm AMI, review the information in Amazon EKS optimized accelerated Amazon Linux AMIs and Amazon EKS optimized Arm Amazon Linux AMIs.
- For Kubernetes version 1.23 or earlier, you can use an optional bootstrap flag to enable the containerd runtime for Amazon EKS optimized Amazon Linux 2 AMIs. This feature provides a clear path to migrate to containerd when updating to version 1.24 or later. Amazon EKS ended support for Docker starting with the Kubernetes version 1.24 launch. The containerd runtime is widely adopted in the Kubernetes community and is a graduated project with the CNCF. You can test it by adding a node group to a new or existing cluster. For more information, see Enable the containerd runtime bootstrap flag.
- When bootstrapped in Amazon EKS optimized accelerated Amazon Linux AMIs for version 1.21, AWS Inferentia workloads aren't supported.
In the following tables, choose View AMI ID for the Kubernetes version, AWS Region, and processor type that are specific to your Amazon Linux instance. You can also retrieve the IDs with an AWS Systems Manager parameter. For more information, see Retrieving Amazon EKS optimized Amazon Linux AMI IDs.
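For example, assuming the standard parameter path covered in that topic, the following AWS CLI command returns the ID of the standard AMI for Kubernetes version 1.23 in your Region. Replace region-code with your own value; the accelerated and Arm variants use amazon-linux-2-gpu and amazon-linux-2-arm64 in place of amazon-linux-2.

# Retrieve the recommended Amazon EKS optimized Amazon Linux 2 AMI ID
# for Kubernetes version 1.23 from the public SSM parameter.
aws ssm get-parameter \
    --name /aws/service/eks/optimized-ami/1.23/amazon-linux-2/recommended/image_id \
    --region region-code \
    --query "Parameter.Value" \
    --output text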
These AMIs require the latest AWS CloudFormation node template. Make sure that you update any existing AWS CloudFormation node stacks with the latest template before you attempt to use these AMIs.
https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2022-12-23/amazon-eks-nodegroup.yaml
The AWS CloudFormation node template launches your nodes with Amazon EC2 user data that triggers a specialized bootstrap script.
Enable the containerd runtime bootstrap flag
The Amazon EKS optimized Amazon Linux 2 AMI contains an optional bootstrap flag to enable the containerd runtime. This feature provides a clear path to migrate to containerd. Amazon EKS ended support for Docker starting with the Kubernetes version 1.24 launch. For more information, see Amazon EKS ended support for Dockershim.
You can enable the bootstrap flag by creating one of the following types of node groups.
- Self-managed – Create the node group using the instructions in Launching self-managed Amazon Linux nodes. Specify an Amazon EKS optimized AMI and the following text for the BootstrapArguments parameter.

  --container-runtime containerd
- Managed – If you use eksctl, create a file named my-nodegroup.yaml with the following contents. Replace every example value with your own values. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. To retrieve your desired value for ami-1234567890abcdef0, you can use the previous AMI tables.

  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  metadata:
    name: my-cluster
    region: region-code
  managedNodeGroups:
    - name: my-nodegroup
      ami: ami-1234567890abcdef0
      overrideBootstrapCommand: |
        #!/bin/bash
        /etc/eks/bootstrap.sh my-cluster --container-runtime containerd

  Note

  If you launch many nodes simultaneously, you may also want to specify values for the --apiserver-endpoint, --b64-cluster-ca, and --dns-cluster-ip bootstrap arguments to avoid errors. For more information, see Specifying an AMI.

  Run the following command to create the node group.

  eksctl create nodegroup -f my-nodegroup.yaml --version 1.23
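After the nodes have joined the cluster, one way to confirm that they registered with the containerd runtime (a suggested check, not part of the original procedure) is to look at the CONTAINER-RUNTIME column that kubectl reports for each node; nodes running containerd report a value such as containerd://1.6.x.

# List the nodes along with their reported container runtime.
kubectl get nodes -o wide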
If you prefer to use a different tool to create your managed node group, you must deploy the node group using a launch template. In your launch template, specify an Amazon EKS optimized AMI ID, then deploy the node group and provide the following user data. This user data passes arguments into the bootstrap.sh file. For more information about the bootstrap file, see bootstrap.sh on GitHub.

/etc/eks/bootstrap.sh my-cluster \
  --container-runtime containerd
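If you also want to pass the bootstrap arguments mentioned in the earlier note, a rough sketch follows. The lookup commands are run from your workstation, my-cluster and region-code are placeholders, and the DNS cluster IP shown is only the common default (10.100.0.10, or 172.20.0.10 depending on your VPC CIDR); substitute the values that the lookups return for api-server-endpoint and certificate-authority.

# Look up the API server endpoint and cluster certificate authority data
# to substitute into the user data (run these from your workstation).
aws eks describe-cluster --name my-cluster --region region-code \
  --query "cluster.endpoint" --output text
aws eks describe-cluster --name my-cluster --region region-code \
  --query "cluster.certificateAuthority.data" --output text

# User data with the additional bootstrap arguments filled in.
/etc/eks/bootstrap.sh my-cluster \
  --container-runtime containerd \
  --apiserver-endpoint api-server-endpoint \
  --b64-cluster-ca certificate-authority \
  --dns-cluster-ip 10.100.0.10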
Amazon EKS optimized accelerated Amazon Linux AMIs
The Amazon EKS optimized accelerated Amazon Linux AMI is built on top of the standard Amazon EKS optimized Amazon Linux AMI. It's configured to serve as an optional image for Amazon EKS nodes to support GPU and Inferentia based workloads.
In addition to the standard Amazon EKS optimized AMI configuration, the accelerated AMI includes the following:
- NVIDIA drivers
- The nvidia-container-runtime (as the default runtime)
- AWS Neuron container runtime

Note

- The Amazon EKS optimized accelerated AMI only supports GPU and Inferentia based instance types. Make sure to specify these instance types in your node AWS CloudFormation template. By using the Amazon EKS optimized accelerated AMI, you agree to NVIDIA's end user license agreement (EULA).
- The Amazon EKS optimized accelerated AMI was previously referred to as the Amazon EKS optimized AMI with GPU support.
- Previous versions of the Amazon EKS optimized accelerated AMI installed the nvidia-docker repository. The repository is no longer included in Amazon EKS AMI version v20200529 and later.
To enable GPU based workloads
The following procedure describes how to run a workload on a GPU based instance with the Amazon EKS optimized accelerated AMI. For more information about using Inferentia based workloads, see Machine learning inference using AWS Inferentia.
- After your GPU nodes join your cluster, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

  kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml
- You can verify that your nodes have allocatable GPUs with the following command.

  kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
To deploy a pod to test that your GPU nodes are configured properly
- Create a file named nvidia-smi.yaml with the following contents. This manifest launches a CUDA container that runs nvidia-smi on a node.

  apiVersion: v1
  kind: Pod
  metadata:
    name: nvidia-smi
  spec:
    restartPolicy: OnFailure
    containers:
    - name: nvidia-smi
      image: nvidia/cuda:9.2-devel
      args:
      - "nvidia-smi"
      resources:
        limits:
          nvidia.com/gpu: 1
- Apply the manifest with the following command.

  kubectl apply -f nvidia-smi.yaml
- After the pod has finished running, view its logs with the following command.

  kubectl logs nvidia-smi
The example output is as follows.
Mon Aug  6 20:23:31 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:1C.0 Off |                    0 |
| N/A   46C    P0    47W / 300W |      0MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Amazon EKS optimized Arm Amazon Linux AMIs
Arm instances deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores. When adding Arm nodes to your cluster, review the following considerations.
Considerations
- If your cluster was deployed before August 17, 2020, you must do a one-time upgrade of critical cluster add-on manifests. This is so that Kubernetes can pull the correct image for each hardware architecture in use in your cluster. For more information about updating cluster add-ons, see Update the Kubernetes version for your Amazon EKS cluster. If you deployed your cluster on or after August 17, 2020, then your CoreDNS, kube-proxy, and Amazon VPC CNI plugin for Kubernetes add-ons are already multi-architecture capable.
- Applications deployed to Arm nodes must be compiled for Arm.
- You can't use the Amazon FSx for Lustre CSI driver with Arm.
- If you have DaemonSets that are deployed in an existing cluster, or you want to deploy them to a new cluster that you also want to deploy Arm nodes in, then verify that your DaemonSet can run on all hardware architectures in your cluster.
- You can run Arm node groups and x86 node groups in the same cluster. If you do, consider deploying multi-architecture container images to a container repository such as Amazon Elastic Container Registry and then adding node selectors to your manifests so that Kubernetes knows what hardware architecture a pod can be deployed to (a minimal example follows this list). For more information, see Pushing a multi-architecture image in the Amazon ECR User Guide and the Introducing multi-architecture container images for Amazon ECR blog post.
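As a minimal sketch of the node selector approach mentioned in the last consideration, the following pins a pod to Arm nodes using the well-known kubernetes.io/arch node label. The pod name and image are placeholders; any multi-architecture image works the same way.

# Deploy a pod that Kubernetes schedules only onto arm64 nodes.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: arch-example
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
  - name: app
    image: public.ecr.aws/docker/library/nginx:latest
EOF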