Amazon EKS
User Guide

Getting Started with the AWS Management Console

This getting started guide helps you to create all of the required resources to get started with Amazon EKS in the AWS Management Console. In this guide, you manually create each resource in the Amazon EKS or AWS CloudFormation consoles, and the workflow described here gives you complete visibility into how each resource is created and how they interact with each other.

For a simpler and more automated getting started experience, see Getting Started with eksctl.

Amazon EKS Prerequisites

Before you can create an Amazon EKS cluster, you must create an IAM role that Kubernetes can assume to create AWS resources. For example, when a load balancer is created, Kubernetes assumes the role to create an Elastic Load Balancing load balancer in your account. This role only needs to be created once and can be used for multiple EKS clusters.

You must also create a VPC and a security group for your cluster to use. Although the VPC and security groups can be used for multiple EKS clusters, we recommend that you use a separate VPC for each EKS cluster to provide better network isolation.

This section also helps you to install the kubectl binary and configure it to work with Amazon EKS.

Create your Amazon EKS Service Role

To create your Amazon EKS service role in the IAM console

  1. Open the IAM console at https://console.aws.amazon.com/iam/.

  2. Choose Roles, then Create role.

  3. Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.

  4. Choose Next: Tags.

  5. (Optional) Add metadata to the role by attaching tags as key–value pairs. For more information about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.

  6. Choose Next: Review.

  7. For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
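
If you prefer the AWS CLI to the IAM console, the following is a minimal sketch of the same role creation. The role name eksServiceRole matches the console example above, and the two managed policies shown are those typically attached for the Amazon EKS service role; verify them against your use case before relying on this sketch.

  # Create the role with a trust policy that lets Amazon EKS assume it
  aws iam create-role --role-name eksServiceRole \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

  # Attach the Amazon EKS managed policies to the role
  aws iam attach-role-policy --role-name eksServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
  aws iam attach-role-policy --role-name eksServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy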

Create your Amazon EKS Cluster VPC

This section guides you through creating a VPC for your cluster with either three public subnets, or two public subnets and two private subnets that are provided with internet access through a NAT gateway. We recommend a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create public load balancers within.

Follow the procedure below that matches your desired VPC configuration.

Only public subnets

To create your cluster VPC with only public subnets

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  2. From the navigation bar, select a Region that supports Amazon EKS.

  3. Choose Create stack.

  4. For Choose a template, select Specify an Amazon S3 template URL.

  5. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-vpc-sample.yaml
  6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.

    • VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.

    • Subnet01Block: Specify a CIDR range for subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • Subnet02Block: Specify a CIDR range for subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • Subnet03Block: Specify a CIDR range for subnet 3. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

  7. (Optional) On the Options page, tag your stack resources. Choose Next.

  8. On the Review page, choose Create.

  9. When your stack is created, select it in the console and choose Outputs.

  10. Record the SecurityGroups value for the security group that was created. You need this when you create your EKS cluster; this security group is applied to the cross-account elastic network interfaces that are created in your subnets and that allow the Amazon EKS control plane to communicate with your worker nodes.

  11. Record the VpcId for the VPC that was created. You need this when you launch your worker node group template.

  12. Record the SubnetIds for the subnets that were created. You need this when you create your EKS cluster; these are the subnets that your worker nodes are launched into.
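
If you prefer the AWS CLI to the CloudFormation console, the same stack (public subnets only) can be created and its outputs read with commands like the following sketch; the stack name eks-vpc matches the example above.

  aws cloudformation create-stack --stack-name eks-vpc \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-vpc-sample.yaml

  # Wait for stack creation to finish, then list the SecurityGroups, VpcId, and SubnetIds outputs
  aws cloudformation wait stack-create-complete --stack-name eks-vpc
  aws cloudformation describe-stacks --stack-name eks-vpc --query "Stacks[0].Outputs"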

Public and private subnets

To create your cluster VPC with public and private subnets

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  2. From the navigation bar, select a Region that supports Amazon EKS.

  3. Choose Create stack.

  4. For Choose a template, select Specify an Amazon S3 template URL.

  5. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-vpc-private-subnets.yaml
  6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.

    • VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.

    • PublicSubnet01Block: Specify a CIDR range for public subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PublicSubnet02Block: Specify a CIDR range for public subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PrivateSubnet01Block: Specify a CIDR range for private subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PrivateSubnet02Block: Specify a CIDR range for private subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

  7. (Optional) On the Options page, tag your stack resources. Choose Next.

  8. On the Review page, choose Create.

  9. When your stack is created, select it in the console and choose Outputs.

  10. Record the SecurityGroups value for the security group that was created. You need this when you create your EKS cluster; this security group is applied to the cross-account elastic network interfaces that are created in your subnets and that allow the Amazon EKS control plane to communicate with your worker nodes.

  11. Record the VpcId for the VPC that was created. You need this when you launch your worker node group template.

  12. Record the SubnetIds for the subnets that were created. You need this when you create your EKS cluster; these are the subnets that your worker nodes are launched into.

  13. Tag your private subnets so that Kubernetes knows that it can use them for internal load balancers.

    1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

    2. Choose Subnets in the left navigation.

    3. Select one of the private subnets for your Amazon EKS cluster's VPC (you can filter them with the string PrivateSubnet), choose the Tags tab, and then choose Add/Edit Tags.

    4. Choose Create Tag and add the following key and value, and then choose Save.

      Key: kubernetes.io/role/internal-elb
      Value: 1

    5. Repeat these substeps for each private subnet in your VPC.
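
You can also apply this tag from the AWS CLI instead of the Amazon VPC console. The subnet IDs below are placeholders; substitute your own private subnet IDs.

  # Tag the private subnets so Kubernetes can place internal load balancers in them
  aws ec2 create-tags --resources subnet-0abc123example subnet-0def456example \
    --tags Key=kubernetes.io/role/internal-elb,Value=1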

Install and Configure kubectl for Amazon EKS

Kubernetes uses a command-line utility called kubectl for communicating with the cluster API server.

To install kubectl for Amazon EKS

  • You have multiple options to download and install kubectl for your operating system.

    • The kubectl binary is available in many operating system package managers, and this option is often much easier than a manual download and install process. You can follow the instructions for your specific operating system or package manager in the Kubernetes documentation to install.

    • Amazon EKS also vends kubectl binaries that are identical to the upstream kubectl binaries of the same version. To install the Amazon EKS-vended binary for your operating system, see Installing kubectl.
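
After installation, you can confirm that kubectl is on your PATH and check its client version with the following command (omit --short if your kubectl build doesn't support it):

  kubectl version --short --client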

Install the Latest AWS CLI

To use kubectl with your Amazon EKS clusters, you must install a binary that can create the required client security token for cluster API server communication. The aws eks get-token command, available in version 1.16.156 or greater of the AWS CLI, supports client security token creation. To install or upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

Important

Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To ensure that you have the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

You can check your AWS CLI version with the following command:

aws --version

Note

Your system's Python version must be 2.7.9 or greater. Otherwise, you receive hostname doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are "hostname doesn't match" errors? in the Python Requests FAQ.

If you are unable to install version 1.16.156 or greater of the AWS CLI on your system, you must ensure that the AWS IAM Authenticator for Kubernetes is installed on your system. For more information, see Installing aws-iam-authenticator.
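
If you install the AWS IAM Authenticator for Kubernetes instead, you can confirm that the binary is on your PATH by printing its help output:

  aws-iam-authenticator help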

Step 1: Create Your Amazon EKS Cluster

Now you can create your Amazon EKS cluster.

Important

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more information, see Managing Users or IAM Roles for your Cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

If you install and configure the AWS CLI, you can configure the IAM credentials for your user. If the AWS CLI is configured properly for your user, then eksctl and the AWS IAM Authenticator for Kubernetes can find those credentials as well. For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
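
For example, you can set your credentials and confirm which IAM identity your kubectl commands will use (and therefore which identity receives the initial cluster administrator permissions):

  aws configure
  aws sts get-caller-identity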

To create your cluster with the console

  1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

  2. Choose Create cluster.

    Note

    If your IAM user does not have administrative privileges, you must explicitly add permissions for that user to call the Amazon EKS API operations. For more information, see Amazon EKS Identity-Based Policy Examples.

  3. On the Create cluster page, fill in the following fields and then choose Create:

    • Cluster name: A unique name for your cluster.

    • Kubernetes version: The version of Kubernetes to use for your cluster. By default, the latest available version is selected.

    • Role ARN: Select the IAM role that you created with Create your Amazon EKS Service Role.

    • VPC: The VPC you created with Create your Amazon EKS Cluster VPC. You can find the name of your VPC in the drop-down list.

    • Subnets: The SubnetIds values (comma-separated) from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. Specify all subnets that will host resources for your cluster (such as private subnets for worker nodes and public subnets for load balancers). By default, the available subnets in the VPC specified in the previous field are preselected.

    • Security Groups: The SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. This security group has ControlPlaneSecurityGroup in the drop-down name.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • Endpoint private access: Choose whether to enable or disable private access for your cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API requests that originate from within your cluster's VPC will use the private VPC endpoint. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Endpoint public access: Choose whether to enable or disable public access for your cluster's Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server can only receive requests from within the cluster VPC. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Logging: For each individual log type, choose whether the log type should be Enabled or Disabled. By default, each log type is Disabled. For more information, see Amazon EKS Control Plane Logging.

    Note

    You might receive an error that one of the Availability Zones in your request doesn't have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see Insufficient Capacity.

  4. On the Clusters page, choose the name of your newly created cluster to view the cluster information.

  5. The Status field shows CREATING until the cluster provisioning process completes. Cluster provisioning usually takes between 10 and 15 minutes.
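
If you prefer the AWS CLI for this step, the following is a minimal sketch of the same cluster creation. The cluster name, role ARN, subnet IDs, and security group ID are placeholders; substitute the values that you recorded in the previous procedures.

  aws eks create-cluster --name my-eks-cluster \
    --role-arn arn:aws:iam::111122223333:role/eksServiceRole \
    --resources-vpc-config subnetIds=subnet-0abc123example,subnet-0def456example,securityGroupIds=sg-0123456789example

  # Wait for the cluster status to become ACTIVE
  aws eks wait cluster-active --name my-eks-cluster
  aws eks describe-cluster --name my-eks-cluster --query cluster.status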

Step 2: Create a kubeconfig File

In this section, you create a kubeconfig file for your cluster with the AWS CLI update-kubeconfig command. If you do not want to install the AWS CLI, or if you would prefer to create or update your kubeconfig manually, see Create a kubeconfig for Amazon EKS.

To create your kubeconfig file with the AWS CLI

  1. Ensure that you have at least version 1.16.156 of the AWS CLI installed. To install or upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

    Note

    Your system's Python version must be 2.7.9 or greater. Otherwise, you receive hostname doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are "hostname doesn't match" errors? in the Python Requests FAQ.

    You can check your AWS CLI version with the following command:

    aws --version

    Important

    Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To ensure that you have the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

  2. Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.

    • By default, the resulting configuration file is created at the default kubeconfig path (.kube/config) in your home directory or merged with an existing kubeconfig at that location. You can specify another path with the --kubeconfig option.

    • You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands (see the example after this procedure). Otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-caller-identity command.

    • For more information, see the help page with the aws eks update-kubeconfig help command or see update-kubeconfig in the AWS CLI Command Reference.

    aws eks --region region update-kubeconfig --name cluster_name
  3. Test your configuration.

    kubectl get svc

    Note

    If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

    If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

    Output:

    NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
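
If you need kubectl to authenticate with a different IAM role (the --role-arn option mentioned in step 2), the command looks like the following sketch; the role ARN is a placeholder.

  aws eks --region region update-kubeconfig --name cluster_name --role-arn arn:aws:iam::111122223333:role/eks-admin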

Step 3: Launch and Configure Amazon EKS Worker Nodes

Now that your VPC and Kubernetes control plane are created, you can launch and configure your worker nodes.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see Amazon EC2 Pricing.

To launch your worker nodes

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.

  2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  3. From the navigation bar, select a Region that supports Amazon EKS.

  4. Choose Create stack.

  5. For Choose a template, select Specify an Amazon S3 template URL.

  6. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml

    Note

    If you intend to deploy worker nodes only to private subnets, you should edit this template in the AWS CloudFormation designer and modify the AssociatePublicIpAddress parameter in the NodeLaunchConfig to be false.

    AssociatePublicIpAddress: 'false'
  7. On the Specify Details page, fill out the following parameters accordingly, and choose Next.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName: Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match the name you used in Step 1: Create Your Amazon EKS Cluster; otherwise, your worker nodes cannot join the cluster.

    • ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC.

    • NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node Auto Scaling group can scale in to.

    • NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker node Auto Scaling group can scale out to.

    • NodeInstanceType: Choose an instance type for your worker nodes.

      Important

      Some instance types might not be available in all regions.

    • NodeImageId: Enter the current Amazon EKS worker node AMI ID for your Region. The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support) are shown in the following table.

      Note

      The Amazon EKS-optimized AMI with GPU support only supports P2 and P3 instance types. Be sure to specify these instance types in your worker node AWS CloudFormation template. By using the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license agreement (EULA).

      Kubernetes version 1.13.8
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) ami-027683840ad78d833 ami-0af8403c143fd4a07
      US East (N. Virginia) (us-east-1) ami-0d3998d69ebe9b214 ami-0484012ada3522476
      US West (Oregon) (us-west-2) ami-00b95829322267382 ami-0d24da600cc96ae6b
      Asia Pacific (Hong Kong) (ap-east-1) ami-03f8634a8fd592414 ami-080eb165234752969
      Asia Pacific (Mumbai) (ap-south-1) ami-0062e5b0411e77c1a ami-010dbb7183ab64b39
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0a67c71d2ab43d36f ami-069303796840f8155
      Asia Pacific (Seoul) (ap-northeast-2) ami-0d66d2fefbc86831a ami-04f71dc710ff5baf4
      Asia Pacific (Singapore) (ap-southeast-1) ami-06206d907abb34bbc ami-0213fc532b1c2e05f
      Asia Pacific (Sydney) (ap-southeast-2) ami-09f2d86f2d8c4f77d ami-01fc0a4c67f82532b
      EU (Frankfurt) (eu-central-1) ami-038bd8d3a2345061f ami-07b7cbb235789cc31
      EU (Ireland) (eu-west-1) ami-0199284372364b02a ami-00bfeece5b673b69f
      EU (London) (eu-west-2) ami-0f454b09349248e29 ami-0babebc79dbf6016c
      EU (Paris) (eu-west-3) ami-00b44348ab3eb2c9f ami-03136b5b83c5b61ba
      EU (Stockholm) (eu-north-1) ami-02218be9004537a65 ami-057821acea15c1a98
      Kubernetes version 1.12.10
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) ami-0ebb1c51e5fe9c376 ami-0b42bfc7af8bb3abc
      US East (N. Virginia) (us-east-1) ami-01e370f796735b244 ami-0eb0119f55d589a03
      US West (Oregon) (us-west-2) ami-0b520e822d42998c1 ami-0c9156d7fcd3c2948
      Asia Pacific (Hong Kong) (ap-east-1) ami-0aa07b9e8bfcdaaff ami-0a5e7de0e5d22a988
      Asia Pacific (Mumbai) (ap-south-1) ami-03b7b0e3088a72394 ami-0c1bc87ff613a979b
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0f554256ac7b33081 ami-0e2f87975f5aa9908
      Asia Pacific (Seoul) (ap-northeast-2) ami-066a40f5f0e0b90f4 ami-08101c357b41e9f9a
      Asia Pacific (Singapore) (ap-southeast-1) ami-06a42a7479836d402 ami-0420c66a82472f4b2
      Asia Pacific (Sydney) (ap-southeast-2) ami-0f93997f60ca40d26 ami-04a085528a6af6499
      EU (Frankfurt) (eu-central-1) ami-04341c15c2f941589 ami-09c45f4e40a56254b
      EU (Ireland) (eu-west-1) ami-018b4a3f81f517183 ami-04668c090ff8c1f50
      EU (London) (eu-west-2) ami-0fd0b45d54f80a0e9 ami-0b925567bd252e74c
      EU (Paris) (eu-west-3) ami-0b12420c7f7281432 ami-0f975ac243bcd0da0
      EU (Stockholm) (eu-north-1) ami-01c1b0b8dcbd02b11 ami-093da2874a5426ce3
      Kubernetes version 1.11.10
      Region    Amazon EKS-optimized AMI    Amazon EKS-optimized AMI with GPU support
      US East (Ohio) (us-east-2) ami-0e565ff1ccb9b6979 ami-0f9e62727a55f68d3
      US East (N. Virginia) (us-east-1) ami-08571c6cee1adbb62 ami-0c3d92683a7946ac3
      US West (Oregon) (us-west-2) ami-0566833f0c8e9031e ami-058b22acd515ec20b
      Asia Pacific (Hong Kong) (ap-east-1) ami-0e2e431905d176277 ami-0baf9ac8446e87fb5
      Asia Pacific (Mumbai) (ap-south-1) ami-073c3d075aeb53d1f ami-0c709282458d1114c
      Asia Pacific (Tokyo) (ap-northeast-1) ami-0644b094efc34d888 ami-023f507ec007de487
      Asia Pacific (Seoul) (ap-northeast-2) ami-0ab0067299faa5229 ami-0ccbbe6530310b01d
      Asia Pacific (Singapore) (ap-southeast-1) ami-087f58c635bb8283b ami-0341435cf966cb837
      Asia Pacific (Sydney) (ap-southeast-2) ami-06caef7a88fd74af2 ami-0987b07bd338f97db
      EU (Frankfurt) (eu-central-1) ami-099b3f8db68693895 ami-060f13bd7397f782d
      EU (Ireland) (eu-west-1) ami-06b60c5852910e7b5 ami-0d84963dfda5af073
      EU (London) (eu-west-2) ami-0b56c1f39e4b1eb8e ami-0189e53a00d37a0b6
      EU (Paris) (eu-west-3) ami-036237d1951bfeabc ami-0baea83f5f5d2abfe
      EU (Stockholm) (eu-north-1) ami-0612e10dfe00c5ff6 ami-0d5b7823e58094232

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh

    • VpcId: Enter the ID for the VPC that you created in Create your Amazon EKS Cluster VPC.

    • Subnets: Choose the subnets that you created in Create your Amazon EKS Cluster VPC. If you created your VPC using the steps described at Creating a VPC for Your Amazon EKS Cluster, then specify only the private subnets within the VPC for your worker nodes to launch into.

  8. On the Options page, you can choose to tag your stack resources. Choose Next.

  9. On the Review page, review your information, acknowledge that the stack might create IAM resources, and then choose Create.

  10. When your stack has finished creating, select it in the console and choose the Outputs tab.

  11. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
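
You can also read this output from the AWS CLI; replace the stack name placeholder below with the name that you chose for your worker node stack.

  aws cloudformation describe-stacks --stack-name my-worker-nodes-stack \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" --output text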

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS authenticator configuration map:

    1. Download the configuration map with the following command:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command might take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a P2 or P3 instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
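
    After the device plugin is running, you can list the GPUs that each node reports as allocatable (nodes without GPUs show <none>):

    kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"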

Next Steps

Now that you have a working Amazon EKS cluster with worker nodes, you are ready to start installing Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help you to extend the functionality of your cluster.