
ARM support

You can create an Amazon EKS cluster and add nodes running AWS Graviton-based instances to the cluster. These instances deliver significant cost savings for scale-out and ARM-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores.


These instructions and the assets that they reference are offered as a beta feature that is administered by AWS. Use of these instructions and assets is governed as a beta under the AWS service terms. While in beta, Amazon EKS does not support using AWS Graviton-based instances for production Kubernetes workloads. Submit comments or questions in a GitHub issue.


  • Nodes can use any AWS Graviton-based instance type, such as a1.xlarge or m6g.2xlarge. However, all nodes in a node group must use the same instance type.

  • Nodes must be deployed with Kubernetes version 1.15 or 1.14.

  • To use AWS Graviton-based instance nodes, you must set up a new Amazon EKS cluster. You cannot add these nodes to a cluster that has existing x86 nodes.


Create a cluster

  1. Run the following command to create an Amazon EKS cluster with no nodes. If you want to create a cluster running Kubernetes version 1.14, then replace 1.15 with 1.14 in the command. You can replace region-code with any Region in which Amazon EKS is available.

    eksctl create cluster \
      --name a1-preview \
      --version 1.15 \
      --region region-code \
      --without-nodegroup

    Launching an Amazon EKS cluster using eksctl creates an AWS CloudFormation stack. The launch process for this stack typically takes 10 to 15 minutes. You can monitor the progress in the Amazon EKS console.

  2. When the cluster creation completes, open the AWS CloudFormation console. You will see a stack named eksctl-a1-preview-cluster. Select this stack. Select the Resources tab. Record the values of the IDs for the ControlPlaneSecurityGroup and VPC resources.

  3. Confirm that the cluster is running with the kubectl get svc command. The command returns output similar to the following example output.

    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
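If you script cluster setup, the same confirmation can be automated by parsing the `kubectl get svc` output. A minimal sketch, using a stand-in file for the command output (the cluster IP shown is a hypothetical example value):

```shell
# Stand-in for the `kubectl get svc` output (hypothetical cluster IP).
cat > svc.txt <<'EOF'
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
EOF

# Exit 0 only if the kubernetes ClusterIP service is present.
awk '$1 == "kubernetes" && $2 == "ClusterIP" { found = 1 } END { exit !found }' svc.txt \
  && echo "cluster is running"
```

In a real script, pipe `kubectl get svc` directly into the `awk` check instead of using the stand-in file.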

Enable ARM support

To support having only ARM nodes in an Amazon EKS cluster, you need to update some of the Kubernetes components. Complete the following steps to update CoreDNS and kube-proxy, and install the Amazon VPC ARM64 CNI Plugin for Kubernetes.

  1. Update the CoreDNS image ID using the command that corresponds to the version of the cluster that you installed in a previous step. You can replace 1.15 with 1.14.

    kubectl apply -f
  2. Update the kube-proxy image ID using the command that corresponds to the version of the cluster that you installed in a previous step. You can replace 1.15 with 1.14.

    kubectl apply -f
  3. Deploy the Amazon VPC ARM64 CNI Plugin for Kubernetes.

    kubectl apply -f

Launch nodes


Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see Amazon EC2 pricing.

  1. Open the AWS CloudFormation console. Ensure that you are in the AWS Region that you created your Amazon EKS cluster in.

  2. Choose Create stack, and then choose With new resources (standard).

  3. For Specify template, select Amazon S3 URL, enter the following URL into the Amazon S3 URL box, and then choose Next twice.
  4. On the Specify stack details page, fill out the following parameters accordingly:

    • Stack name – Choose a stack name for your AWS CloudFormation stack. For example, you can name it a1-preview-nodes.

    • KubernetesVersion – Select the version of Kubernetes that you chose when launching your Amazon EKS cluster.

    • ClusterName – Enter the name that you used when you created your Amazon EKS cluster.


      This name must exactly match the name that you used in Create a cluster; otherwise, your nodes cannot join the cluster.

    • ClusterControlPlaneSecurityGroup – Choose the ControlPlaneSecurityGroup ID value from the AWS CloudFormation output that you generated with Create a cluster.

    • NodeGroupName – Enter a name for your node group. This name can be used later to identify the Auto Scaling group that is created for your nodes.

    • NodeAutoScalingGroupMinSize – Enter the minimum number of nodes that your node Auto Scaling group can scale in to.

    • NodeAutoScalingGroupDesiredCapacity – Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes that your node Auto Scaling group can scale out to.

    • NodeInstanceType – Choose one of the A1 or M6g instance types for your nodes, such as a1.large.

    • NodeVolumeSize – Specify a root volume size for your nodes, in GiB.

    • KeyName – Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 key pairs in the Amazon EC2 User Guide for Linux Instances.


      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments – Arguments to pass to the bootstrap script. For details, see

    • VpcId – Enter the ID for the VPC that you created in Create a cluster.

    • Subnets – Choose the subnets that you created in Create a cluster.


      If any of the subnets are public subnets, then they must have the automatic public IP address assignment setting enabled. If the setting is not enabled for the public subnet, then any nodes that you deploy to that public subnet will not be assigned a public IP address and will not be able to communicate with the cluster or other AWS services. If the subnet was deployed before 03/26/2020 using either of the Amazon EKS AWS CloudFormation VPC templates, or by using eksctl, then automatic public IP address assignment is disabled for public subnets. For information about how to enable public IP address assignment for a subnet, see Modifying the Public IPv4 Addressing Attribute for Your Subnet. If the node is deployed to a private subnet, then it is able to communicate with the cluster and other AWS services through a NAT gateway.

    • NodeImageAMI11x – The Amazon EC2 Systems Manager parameter for the AMI image ID. Do not make any changes to this parameter.

  5. Choose Next and then choose Next again.

  6. Acknowledge that the stack might create IAM resources, and then choose Create stack.


    If nodes fail to join the cluster, see Nodes fail to join cluster in the Troubleshooting guide.

  7. When your stack has finished creating, select it in the console and choose Outputs.

  8. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS nodes.

Join nodes to a cluster

  1. Download, edit, and apply the AWS IAM Authenticator configuration map.

    1. Use the following command to download the configuration map:

    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.


      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command may take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      If you receive any authorization or resource type errors, see Unauthorized or access denied (kubectl) in the troubleshooting section.

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
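The manual edit in step 1 can also be scripted with sed. A minimal sketch, assuming the downloaded file is named aws-auth-cm.yaml (the heredoc stands in for the downloaded file, only showing the relevant line, and the role ARN is a hypothetical example):

```shell
# Stand-in for the downloaded aws-auth-cm.yaml (only the relevant line shown).
cat > aws-auth-cm.yaml <<'EOF'
    - rolearn: <ARN of instance role (not instance profile)>
EOF

# Hypothetical NodeInstanceRole ARN recorded from the stack outputs.
NODE_INSTANCE_ROLE="arn:aws:iam::111122223333:role/a1-preview-nodes-NodeInstanceRole"

# Substitute the placeholder; a .bak backup of the original file is kept.
sed -i.bak \
  "s|<ARN of instance role (not instance profile)>|${NODE_INSTANCE_ROLE}|" \
  aws-auth-cm.yaml

cat aws-auth-cm.yaml
```

The `|` delimiter avoids conflicts with the `/` characters inside the role ARN.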

(Optional) Deploy an application

To confirm that you can deploy and run an application on the nodes, complete the following steps.

  1. Deploy the CNI metrics helper with the following command.

    kubectl apply -f

    The output returned is similar to the following example output.

    serviceaccount/cni-metrics-helper created
    deployment.extensions/cni-metrics-helper created
  2. Confirm that the CNI metrics helper is running with the following command.

    kubectl -n kube-system get pods -o wide

    The pod is running if you see the cni-metrics-helper pod returned in the output.
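In a script, the same check can be expressed as a pass/fail test on the pod listing. A minimal sketch, using a stand-in file for the `kubectl -n kube-system get pods -o wide` output (the pod name suffix is a hypothetical example):

```shell
# Stand-in for `kubectl -n kube-system get pods -o wide` (hypothetical pod name).
cat > pods.txt <<'EOF'
NAME                                  READY   STATUS    RESTARTS   AGE
cni-metrics-helper-6d587fd9c4-z7xsv   1/1     Running   0          1m
EOF

# Exit 0 only if a cni-metrics-helper pod is in the Running state.
awk '$1 ~ /^cni-metrics-helper/ && $3 == "Running" { found = 1 } END { exit !found }' pods.txt \
  && echo "cni-metrics-helper is running"
```

In a real script, pipe the `kubectl` command directly into the `awk` check so that a non-running pod fails the deployment verification step.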