Amazon EKS
User Guide


Getting Started with the AWS Management Console

This getting started guide helps you to create all of the required resources to get started with Amazon EKS in the AWS Management Console. In this guide, you manually create each resource in the Amazon EKS or AWS CloudFormation consoles, and the workflow described here gives you complete visibility into how each resource is created and how they interact with each other.

For a simpler and more automated getting started experience, see Getting Started with eksctl.

Amazon EKS Prerequisites

Before you can create an Amazon EKS cluster, you must create an IAM role that Kubernetes can assume to create AWS resources. For example, when a load balancer is created, Kubernetes assumes the role to create an Elastic Load Balancing load balancer in your account. This only needs to be done one time and can be used for multiple EKS clusters.

You must also create a VPC and a security group for your cluster to use. Although the VPC and security groups can be used for multiple EKS clusters, we recommend that you use a separate VPC for each EKS cluster to provide better network isolation.

This section also helps you to install the kubectl binary and configure it to work with Amazon EKS.

Create your Amazon EKS Service Role

To create your Amazon EKS service role in the IAM console

  1. Open the IAM console at https://console.aws.amazon.com/iam/.

  2. Choose Roles, then Create role.

  3. Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.

  4. Choose Next: Tags.

  5. (Optional) Add metadata to the role by attaching tags as key–value pairs. For more information about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.

  6. Choose Next: Review.

  7. For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
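
The console steps above can also be scripted with the AWS CLI. The following is a sketch, not a substitute for the console procedure: the role name eksServiceRole is an example, and it assumes the two Amazon EKS managed policies that the console use case attaches (AmazonEKSClusterPolicy and AmazonEKSServicePolicy).

```shell
# Create a trust policy that allows Amazon EKS to assume the role
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role, then attach the Amazon EKS managed policies
aws iam create-role \
  --role-name eksServiceRole \
  --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```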

Create your Amazon EKS Cluster VPC

This section guides you through creating a VPC for your cluster with either three public subnets, or two public subnets and two private subnets that are provided with internet access through a NAT gateway. We recommend a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create public load balancers within.

Choose the tab below that represents your desired VPC configuration.

Only public subnets

To create your cluster VPC with only public subnets

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  2. From the navigation bar, select a Region that supports Amazon EKS.

  3. Choose Create stack.

  4. For Choose a template, select Specify an Amazon S3 template URL.

  5. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-10-08/amazon-eks-vpc-sample.yaml
  6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.

    • VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.

    • Subnet01Block: Specify a CIDR range for subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • Subnet02Block: Specify a CIDR range for subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • Subnet03Block: Specify a CIDR range for subnet 3. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

  7. (Optional) On the Options page, tag your stack resources. Choose Next.

  8. On the Review page, choose Create.

  9. When your stack is created, select it in the console and choose Outputs.

  10. Record the SecurityGroups value for the security group that was created. You need this when you create your EKS cluster; this security group is applied to the cross-account elastic network interfaces that are created in your subnets that allow the Amazon EKS control plane to communicate with your worker nodes.

  11. Record the VpcId for the VPC that was created. You need this when you launch your worker node group template.

  12. Record the SubnetIds for the subnets that were created. You need this when you create your EKS cluster; these are the subnets that your worker nodes are launched into.
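
The same stack can be created and its outputs retrieved from the AWS CLI. A sketch of the equivalent commands (the stack name eks-vpc is an example):

```shell
# Create the VPC stack from the Amazon EKS sample template
aws cloudformation create-stack \
  --stack-name eks-vpc \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-10-08/amazon-eks-vpc-sample.yaml

# Wait for creation to finish, then print the SecurityGroups,
# VpcId, and SubnetIds outputs to record
aws cloudformation wait stack-create-complete --stack-name eks-vpc
aws cloudformation describe-stacks --stack-name eks-vpc \
  --query "Stacks[0].Outputs" --output table
```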

Public and private subnets

To create your cluster VPC with public and private subnets

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.

  2. From the navigation bar, select a Region that supports Amazon EKS.

  3. Choose Create stack.

  4. For Choose a template, select Specify an Amazon S3 template URL.

  5. Paste the following URL into the text area and choose Next:

    https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-10-08/amazon-eks-vpc-private-subnets.yaml
  6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.

    • VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.

    • PublicSubnet01Block: Specify a CIDR range for public subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PublicSubnet02Block: Specify a CIDR range for public subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PrivateSubnet01Block: Specify a CIDR range for private subnet 1. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

    • PrivateSubnet02Block: Specify a CIDR range for private subnet 2. We recommend that you keep the default value so that you have plenty of IP addresses for pods to use.

  7. (Optional) On the Options page, tag your stack resources. Choose Next.

  8. On the Review page, choose Create.

  9. When your stack is created, select it in the console and choose Outputs.

  10. Record the SecurityGroups value for the security group that was created. You need this when you create your EKS cluster; this security group is applied to the cross-account elastic network interfaces that are created in your subnets that allow the Amazon EKS control plane to communicate with your worker nodes.

  11. Record the VpcId for the VPC that was created. You need this when you launch your worker node group template.

  12. Record the SubnetIds for the subnets that were created. You need this when you create your EKS cluster; these are the subnets that your worker nodes are launched into.

  13. Tag your private subnets so that Kubernetes knows that it can use them for internal load balancers.

    1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

    2. Choose Subnets in the left navigation.

    3. Select one of the private subnets for your Amazon EKS cluster's VPC (you can filter them with the string PrivateSubnet), choose the Tags tab, and then choose Add/Edit Tags.

    4. Choose Create Tag and add the following key and value, and then choose Save.

      Key                               Value
      kubernetes.io/role/internal-elb   1

    5. Repeat these substeps for each private subnet in your VPC.
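
Each tag can also be applied from the AWS CLI. A sketch (the subnet ID is a placeholder example):

```shell
# Tag a private subnet so Kubernetes can use it for internal load balancers
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
```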

Install and Configure kubectl for Amazon EKS

Kubernetes uses a command-line utility called kubectl for communicating with the cluster API server.

To install kubectl for Amazon EKS

  • You have multiple options to download and install kubectl for your operating system.

    • The kubectl binary is available in many operating system package managers, and this option is often much easier than a manual download and install process. You can follow the instructions for your specific operating system or package manager in the Kubernetes documentation to install.

    • Amazon EKS also vends kubectl binaries that are identical to the upstream binaries of the same version. To install the Amazon EKS-vended binary for your operating system, see Installing kubectl.

Install the Latest AWS CLI

To use kubectl with your Amazon EKS clusters, you must install a binary that can create the required client security token for cluster API server communication. The aws eks get-token command, available in version 1.16.232 or greater of the AWS CLI, supports client security token creation. To install or upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

Important

Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest AWS CLI. To ensure that you have the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

You can check your AWS CLI version with the following command:

aws --version

Note

Your system's Python version must be 2.7.9 or greater. Otherwise, you receive hostname doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are "hostname doesn't match" errors? in the Python Requests FAQ.

If you are unable to install version 1.16.232 or greater of the AWS CLI on your system, you must ensure that the AWS IAM Authenticator for Kubernetes is installed on your system. For more information, see Installing aws-iam-authenticator.
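
If you script your setup, the 1.16.232 minimum-version requirement can be checked by comparing dotted version strings with sort -V. A minimal sketch (version_ge and the parsing of the aws --version banner are this sketch's own helpers, not an AWS CLI feature):

```shell
#!/bin/sh
# Succeed if version $1 is greater than or equal to version $2
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="1.16.232"
# Extract the version number from output like
# "aws-cli/1.16.232 Python/2.7.16 ..." (empty if aws is absent)
installed="$(aws --version 2>&1 | sed -n 's|^aws-cli/\([0-9.]*\).*|\1|p')"

if version_ge "${installed:-0}" "$required"; then
  echo "AWS CLI $installed meets the $required minimum"
else
  echo "AWS CLI ${installed:-not found}: install version $required or greater"
fi
```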

Step 1: Create Your Amazon EKS Cluster

Now you can create your Amazon EKS cluster.

Important

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more information, see Managing Users or IAM Roles for your Cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

If you install and configure the AWS CLI, you can configure the IAM credentials for your user. If the AWS CLI is configured properly for your user, then eksctl and the AWS IAM Authenticator for Kubernetes can find those credentials as well. For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.

To create your cluster with the console

  1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

  2. Choose Create cluster.

    Note

    If your IAM user does not have administrative privileges, you must explicitly add permissions for that user to call the Amazon EKS API operations. For more information, see Amazon EKS Identity-Based Policy Examples.

  3. On the Create cluster page, fill in the following fields and then choose Create:

    • Cluster name: A unique name for your cluster.

    • Kubernetes version: The version of Kubernetes to use for your cluster. By default, the latest available version is selected.

    • Role ARN: Select the IAM role that you created with Create your Amazon EKS Service Role.

    • VPC: The VPC you created with Create your Amazon EKS Cluster VPC. You can find the name of your VPC in the drop-down list.

    • Subnets: The SubnetIds values (comma-separated) from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. Specify all subnets that will host resources for your cluster (such as private subnets for worker nodes and public subnets for load balancers). By default, the available subnets in the VPC specified in the previous field are preselected.

    • Security Groups: The SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. This security group has ControlPlaneSecurityGroup in the drop-down name.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • Endpoint private access: Choose whether to enable or disable private access for your cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API requests that originate from within your cluster's VPC will use the private VPC endpoint. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Endpoint public access: Choose whether to enable or disable public access for your cluster's Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server can only receive requests from within the cluster VPC. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Logging: For each individual log type, choose whether the log type should be Enabled or Disabled. By default, each log type is Disabled. For more information, see Amazon EKS Control Plane Logging.

    • Tags: (Optional) Add any tags to your cluster. For more information, see Tagging Your Amazon EKS Resources.

    Note

    You might receive an error that one of the Availability Zones in your request doesn't have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see Insufficient Capacity.

  4. On the Clusters page, choose the name of your newly created cluster to view the cluster information.

  5. The Status field shows CREATING until the cluster provisioning process completes. Cluster provisioning usually takes between 10 and 15 minutes.
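
Instead of watching the console, you can poll the cluster status from the AWS CLI. A sketch (the Region and cluster name are examples):

```shell
# Block until the cluster reports ACTIVE (typically 10 to 15 minutes)
aws eks --region us-west-2 wait cluster-active --name my-eks-cluster

# Confirm the status
aws eks --region us-west-2 describe-cluster --name my-eks-cluster \
  --query "cluster.status" --output text
```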

Step 2: Create a kubeconfig File

In this section, you create a kubeconfig file for your cluster with the AWS CLI update-kubeconfig command. If you do not want to install the AWS CLI, or if you would prefer to create or update your kubeconfig manually, see Create a kubeconfig for Amazon EKS.

To create your kubeconfig file with the AWS CLI

  1. Ensure that you have at least version 1.16.232 of the AWS CLI installed. To install or upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

    Note

    Your system's Python version must be 2.7.9 or greater. Otherwise, you receive hostname doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are "hostname doesn't match" errors? in the Python Requests FAQ.

    You can check your AWS CLI version with the following command:

    aws --version

    Important

    Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest AWS CLI. To ensure that you have the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

  2. Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.

    • By default, the resulting configuration file is created at the default kubeconfig path (.kube/config) in your home directory or merged with an existing kubeconfig at that location. You can specify another path with the --kubeconfig option.

    • You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-caller-identity command.

    • For more information, see the help page with the aws eks update-kubeconfig help command or see update-kubeconfig in the AWS CLI Command Reference.

    aws eks --region region update-kubeconfig --name cluster_name
  3. Test your configuration.

    kubectl get svc

    Note

    If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

    If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

    Output:

    NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m

Step 3: Launch and Configure Amazon EKS Worker Nodes

Now that your VPC and Kubernetes control plane are created, you can launch and configure your worker nodes.

Important

Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see Amazon EC2 Pricing.

To launch your worker nodes

  1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the cluster is active, the worker nodes will fail to register with the cluster and you will have to relaunch them.

  2. Choose the tab below that corresponds to your cluster's Kubernetes version, then choose a Launch workers link that corresponds to your region and AMI type. This opens the AWS CloudFormation console and pre-populates several fields for you.

    Kubernetes version 1.14.7 | Kubernetes version 1.13.11 | Kubernetes version 1.12.10 | Kubernetes version 1.11.10

    Each version tab contains a table with a pair of Launch workers links for every supported Region: one for the Amazon EKS-optimized AMI and one for the Amazon EKS-optimized AMI with GPU support. The supported Regions are US East (Ohio) (us-east-2), US East (N. Virginia) (us-east-1), US West (Oregon) (us-west-2), Asia Pacific (Hong Kong) (ap-east-1), Asia Pacific (Mumbai) (ap-south-1), Asia Pacific (Tokyo) (ap-northeast-1), Asia Pacific (Seoul) (ap-northeast-2), Asia Pacific (Singapore) (ap-southeast-1), Asia Pacific (Sydney) (ap-southeast-2), EU (Frankfurt) (eu-central-1), EU (Ireland) (eu-west-1), EU (London) (eu-west-2), EU (Paris) (eu-west-3), EU (Stockholm) (eu-north-1), and Middle East (Bahrain) (me-south-1).

    Note

    If you intend to deploy worker nodes only to private subnets, edit this template in the AWS CloudFormation designer and set the AssociatePublicIpAddress parameter in the NodeLaunchConfig to false.

    AssociatePublicIpAddress: 'false'
  3. On the Quick create stack page, fill out the following parameters accordingly.

    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it <cluster-name>-worker-nodes.

    • ClusterName: Enter the name that you used when you created your Amazon EKS cluster.

      Important

      This name must exactly match the name you used in Step 1: Create Your Amazon EKS Cluster; otherwise, your worker nodes cannot join the cluster.

    • ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC.

    • NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that is created for your worker nodes.

    • NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node Auto Scaling group can scale in to.

    • NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when your stack is created.

    • NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker node Auto Scaling group can scale out to.

    • NodeInstanceType: Choose an instance type for your worker nodes.

      Important

      Some instance types might not be available in all regions.

    • NodeImageIdSSMParam: Pre-populated based on the version that you launched your worker nodes with in step 2. This value is the Amazon EC2 Systems Manager Parameter Store parameter to use for your worker node AMI ID. For example, the /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id parameter is for the latest recommended Kubernetes version 1.14 Amazon EKS-optimized AMI.

      Note

      The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.

    • NodeImageId: (Optional) If you are using your own custom AMI (instead of the Amazon EKS-optimized AMI), enter a worker node AMI ID for your Region. If you specify a value here, it overrides any values in the NodeImageIdSSMParam field.

    • NodeVolumeSize: Specify a root volume size for your worker nodes, in GiB.

    • KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.

      Note

      If you do not provide a key pair here, the AWS CloudFormation stack creation fails.

    • BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap script, such as extra kubelet arguments. For more information, view the bootstrap script usage information at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh.

    • VpcId: Enter the ID for the VPC that you created in Create your Amazon EKS Cluster VPC.

    • Subnets: Choose the subnets that you created in Create your Amazon EKS Cluster VPC. If you created your VPC using the steps described at Creating a VPC for Your Amazon EKS Cluster, then specify only the private subnets within the VPC for your worker nodes to launch into.

  4. Acknowledge that the stack might create IAM resources, and then choose Create stack.

  5. When your stack has finished creating, select it in the console and choose the Outputs tab.

  6. Record the NodeInstanceRole for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
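
The NodeInstanceRole output can also be read from the AWS CLI. A sketch (the stack name is an example):

```shell
# Print the NodeInstanceRole output from the worker node stack
aws cloudformation describe-stacks --stack-name my-cluster-worker-nodes \
  --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
  --output text
```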

To enable worker nodes to join your cluster

  1. Download, edit, and apply the AWS authenticator configuration map:

    1. Download the configuration map with the following command:

      curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-10-08/aws-auth-cm.yaml
    2. Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.

      Important

      Do not modify any other lines in this file.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: aws-auth
        namespace: kube-system
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
    3. Apply the configuration. This command might take a few minutes to finish.

      kubectl apply -f aws-auth-cm.yaml

      Note

      If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

      If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.
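
The ARN substitution in substep 2 can also be scripted instead of edited by hand. A sketch using sed, with a hypothetical role ARN; run it against the aws-auth-cm.yaml file that you downloaded in substep 1:

```shell
# NodeInstanceRole value recorded in the previous procedure
# (the ARN below is a hypothetical example)
NODE_ROLE_ARN="arn:aws:iam::111122223333:role/my-cluster-worker-node-role"

# Replace the placeholder in place, keeping a .bak backup copy
sed -i.bak \
  "s|<ARN of instance role (not instance profile)>|${NODE_ROLE_ARN}|" \
  aws-auth-cm.yaml
```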

  2. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch
  3. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml

(Optional) To launch Windows worker nodes

Add Windows support to your cluster and launch Windows worker nodes. For more information, see Windows Support. All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to run Windows workloads in your cluster.

Next Steps

Now that you have a working Amazon EKS cluster with worker nodes, you are ready to start installing Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help you to extend the functionality of your cluster.