Amazon EKS
User Guide


Creating an Amazon EKS Cluster

This topic walks you through creating an Amazon EKS cluster.

If this is your first time creating an Amazon EKS cluster, we recommend that you follow one of our Getting Started with Amazon EKS guides instead. They provide complete end-to-end walkthroughs for creating an Amazon EKS cluster with worker nodes.

Important

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more information, see Managing Users or IAM Roles for your Cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you run kubectl commands on your cluster.
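Additional IAM users or roles are later granted access through the aws-auth ConfigMap, which is covered in Managing Users or IAM Roles for your Cluster. As a rough, hypothetical sketch of what such a mapping looks like (the ARN and username below are placeholders, not values used elsewhere in this guide):

```yaml
# Hypothetical aws-auth ConfigMap entry granting an additional IAM user
# cluster administrator access. The user ARN and username are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/example-admin
      username: example-admin
      groups:
        - system:masters
```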

If you install and configure the AWS CLI, you can configure the IAM credentials for your user. If the AWS CLI is configured properly for your user, then eksctl and the AWS IAM Authenticator for Kubernetes can find those credentials as well. For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.

Choose the tab below that corresponds to your desired cluster creation method:

eksctl

To create your cluster and worker nodes with eksctl

  1. Choose a tab below that matches your workload requirements. If you only intend to run Linux workloads on your cluster, choose Linux. If you want to run Linux and Windows workloads on your cluster, choose Windows.

    Linux

    This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.7.0. You can check your version with the following command:

    eksctl version

    For more information on installing or upgrading eksctl, see Installing or Upgrading eksctl.

    Create your Amazon EKS cluster and Linux worker nodes with the following command. Replace the example values with your own values.

    Important

    Amazon EKS will deprecate Kubernetes version 1.11 on November 4th, 2019. On this day, you will no longer be able to create new 1.11 clusters, and all Amazon EKS clusters running Kubernetes version 1.11 will be updated to the latest available platform version of Kubernetes version 1.12. For more information, see Amazon EKS Version Deprecation.

    Kubernetes version 1.10 is no longer supported on Amazon EKS. You can no longer create new 1.10 clusters, and all existing Amazon EKS clusters running Kubernetes version 1.10 will eventually be automatically updated to the latest available platform version of Kubernetes version 1.11. For more information, see Amazon EKS Version Deprecation.

    Update any 1.10 clusters to version 1.11 or later to avoid service interruption. For more information, see Updating an Amazon EKS Cluster Kubernetes Version.

    eksctl create cluster \
    --name prod \
    --version 1.14 \
    --nodegroup-name standard-workers \
    --node-type t3.medium \
    --nodes 3 \
    --nodes-min 1 \
    --nodes-max 4 \
    --node-ami auto

    Note

    For more information on the available options for eksctl create cluster, see the project README on GitHub or view the help page with the following command.

    eksctl create cluster --help

    Output:

    [ℹ] using region us-west-2
    [ℹ] setting availability zones to [us-west-2b us-west-2c us-west-2d]
    [ℹ] subnets for us-west-2b - public:192.168.0.0/19 private:192.168.96.0/19
    [ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
    [ℹ] subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
    [ℹ] nodegroup "standard-workers" will use "ami-0923e4b35a30a5f53" [AmazonLinux2/1.12]
    [ℹ] creating EKS cluster "prod" in "us-west-2" region
    [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
    [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=prod'
    [ℹ] building cluster stack "eksctl-prod-cluster"
    [ℹ] creating nodegroup stack "eksctl-prod-nodegroup-standard-workers"
    [✔] all EKS cluster resources for "prod" have been created
    [✔] saved kubeconfig as "/Users/username/.kube/config"
    [ℹ] adding role "arn:aws:iam::111122223333:role/eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-IJP4S12W3020" to auth ConfigMap
    [ℹ] nodegroup "standard-workers" has 0 node(s)
    [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
    [ℹ] nodegroup "standard-workers" has 2 node(s)
    [ℹ] node "ip-192-168-22-17.us-west-2.compute.internal" is not ready
    [ℹ] node "ip-192-168-32-184.us-west-2.compute.internal" is ready
    [ℹ] kubectl command should work with "/Users/username/.kube/config", try 'kubectl get nodes'
    [✔] EKS cluster "prod" in "us-west-2" region is ready
    Windows

    This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.7.0. You can check your version with the following command:

    eksctl version

    For more information on installing or upgrading eksctl, see Installing or Upgrading eksctl.

    Save the text below to a file named cluster-spec.yaml, replacing the example values with your own. The configuration file is used to create a cluster with both Linux and Windows worker node groups. Even if you only want to run Windows workloads in your cluster, all Amazon EKS clusters must contain at least one Linux worker node. We recommend that you create at least two worker nodes in each node group for availability purposes. The minimum required Kubernetes version for Windows workloads is 1.14.

    ---
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: windows-prod
      region: us-west-2
      version: '1.14'
    nodeGroups:
      - name: linux-ng
        instanceType: t2.large
        minSize: 2
      - name: windows-ng
        instanceType: m5.large
        minSize: 2
        volumeSize: 100
        amiFamily: WindowsServer2019FullContainer

    Create your Amazon EKS cluster and Windows and Linux worker nodes with the following command.

    eksctl create cluster -f cluster-spec.yaml --install-vpc-controllers

    Note

    For more information on the available options for eksctl create cluster, see the project README on GitHub or view the help page with the following command.

    eksctl create cluster --help

    Output:

    [ℹ] using region us-west-2
    [ℹ] setting availability zones to [us-west-2a us-west-2d us-west-2c]
    [ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
    [ℹ] subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19
    [ℹ] subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
    [ℹ] nodegroup "linux-ng" will use "ami-076c743acc3ec4159" [AmazonLinux2/1.14]
    [ℹ] nodegroup "windows-ng" will use "ami-0c7f1b5f1bebccac2" [WindowsServer2019FullContainer/1.14]
    [ℹ] using Kubernetes version 1.14
    [ℹ] creating EKS cluster "windows-cluster" in "us-west-2" region
    [ℹ] 2 nodegroups (linux-ng, windows-ng) were included (based on the include/exclude rules)
    [ℹ] will create a CloudFormation stack for cluster itself and 2 nodegroup stack(s)
    [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=windows-cluster'
    [ℹ] CloudWatch logging will not be enabled for cluster "windows-cluster" in "us-west-2"
    [ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --name=windows-cluster'
    [ℹ] 3 sequential tasks: { create cluster control plane "windows-cluster", 2 parallel sub-tasks: { create nodegroup "linux-ng", create nodegroup "windows-ng" }, install Windows VPC controller }
    [ℹ] building cluster stack "eksctl-windows-cluster-cluster"
    [ℹ] deploying stack "eksctl-windows-cluster-cluster"
    [ℹ] building nodegroup stack "eksctl-windows-cluster-nodegroup-linux-ng"
    [ℹ] building nodegroup stack "eksctl-windows-cluster-nodegroup-windows-ng"
    [ℹ] --nodes-max=2 was set automatically for nodegroup windows-ng
    [ℹ] --nodes-max=2 was set automatically for nodegroup linux-ng
    [ℹ] deploying stack "eksctl-windows-cluster-nodegroup-windows-ng"
    [ℹ] deploying stack "eksctl-windows-cluster-nodegroup-linux-ng"
    [ℹ] created "ClusterRole.rbac.authorization.k8s.io/vpc-resource-controller"
    [ℹ] created "ClusterRoleBinding.rbac.authorization.k8s.io/vpc-resource-controller"
    [ℹ] created "kube-system:ServiceAccount/vpc-resource-controller"
    [ℹ] created "kube-system:Deployment.apps/vpc-resource-controller"
    [ℹ] created "CertificateSigningRequest.certificates.k8s.io/vpc-admission-webhook.kube-system"
    [ℹ] created "kube-system:secret/vpc-admission-webhook-certs"
    [ℹ] created "kube-system:Service/vpc-admission-webhook"
    [ℹ] created "kube-system:Deployment.apps/vpc-admission-webhook"
    [ℹ] created "kube-system:MutatingWebhookConfiguration.admissionregistration.k8s.io/vpc-admission-webhook-cfg"
    [✔] all EKS cluster resources for "windows-cluster" have been created
    [✔] saved kubeconfig as "C:\\Users\\username/.kube/config"
    [ℹ] adding role "arn:aws:iam::123456789012:role/eksctl-windows-cluster-nodegroup-NodeInstanceRole-ZR93IIUZSYPR" to auth ConfigMap
    [ℹ] nodegroup "linux-ng" has 0 node(s)
    [ℹ] waiting for at least 2 node(s) to become ready in "linux-ng"
    [ℹ] nodegroup "linux-ng" has 2 node(s)
    [ℹ] node "ip-192-168-8-247.us-west-2.compute.internal" is ready
    [ℹ] node "ip-192-168-80-253.us-west-2.compute.internal" is ready
    [ℹ] adding role "arn:aws:iam::123456789012:role/eksctl-windows-cluster-nodegroup-NodeInstanceRole-XM9UZN3NXBOB" to auth ConfigMap
    [ℹ] nodegroup "windows-ng" has 0 node(s)
    [ℹ] waiting for at least 2 node(s) to become ready in "windows-ng"
    [ℹ] nodegroup "windows-ng" has 2 node(s)
    [ℹ] node "ip-192-168-4-192.us-west-2.compute.internal" is ready
    [ℹ] node "ip-192-168-63-224.us-west-2.compute.internal" is ready
    [ℹ] kubectl command should work with "C:\\Users\\username/.kube/config", try 'kubectl get nodes'
    [✔] EKS cluster "windows-cluster" in "us-west-2" region is ready
  2. Cluster provisioning usually takes between 10 and 15 minutes. When your cluster is ready, test that your kubectl configuration is correct.

    kubectl get svc

    Note

    If you receive the error "aws-iam-authenticator": executable file not found in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see Installing aws-iam-authenticator.

    If you receive any other authorization or resource type errors, see Unauthorized or Access Denied (kubectl) in the troubleshooting section.

    Output:

    NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
  3. (Linux GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
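As a side note, the example eksctl output earlier shows the default 192.168.0.0/16 cluster VPC carved into /19 public and private subnets across three Availability Zones. The arithmetic behind those CIDRs can be sketched with Python's standard ipaddress module; this is purely an illustration of the sample values, not part of the procedure:

```python
import ipaddress

# Split the default eksctl VPC CIDR (192.168.0.0/16) into /19 blocks,
# matching the subnet layout shown in the sample output above.
vpc = ipaddress.ip_network("192.168.0.0/16")
blocks = list(vpc.subnets(new_prefix=19))  # yields 8 /19 networks

# The first three /19s correspond to the public subnets in the sample output,
# and the next three to the private subnets.
public_subnets = [str(b) for b in blocks[:3]]
private_subnets = [str(b) for b in blocks[3:6]]

print(public_subnets)   # ['192.168.0.0/19', '192.168.32.0/19', '192.168.64.0/19']
print(private_subnets)  # ['192.168.96.0/19', '192.168.128.0/19', '192.168.160.0/19']
```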
AWS Management Console

To create your cluster with the console

This procedure has the following prerequisites:

  • You have created a VPC with subnets and a dedicated security group that meet the requirements for an Amazon EKS cluster. For more information, see Cluster VPC Considerations and Cluster Security Group Considerations.

  • You have created an Amazon EKS service role to apply to your cluster. For more information, see Amazon EKS IAM Roles.

  1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

  2. Choose Create cluster.

    Note

    If your IAM user doesn't have administrative privileges, you must explicitly add permissions for that user to call the Amazon EKS API operations. For more information, see Amazon EKS Identity-Based Policy Examples.

  3. On the Create cluster page, fill in the following fields and then choose Create:

    • Cluster name – A unique name for your cluster.

    • Kubernetes version – The version of Kubernetes to use for your cluster. Unless you require a specific Kubernetes version for your application, we recommend that you use the latest version available in Amazon EKS.

      Important

      Amazon EKS will deprecate Kubernetes version 1.11 on November 4th, 2019. On this day, you will no longer be able to create new 1.11 clusters, and all Amazon EKS clusters running Kubernetes version 1.11 will be updated to the latest available platform version of Kubernetes version 1.12. For more information, see Amazon EKS Version Deprecation.

      Kubernetes version 1.10 is no longer supported on Amazon EKS. You can no longer create new 1.10 clusters, and all existing Amazon EKS clusters running Kubernetes version 1.10 will eventually be automatically updated to the latest available platform version of Kubernetes version 1.11. For more information, see Amazon EKS Version Deprecation.

      Update any 1.10 clusters to version 1.11 or later to avoid service interruption. For more information, see Updating an Amazon EKS Cluster Kubernetes Version.

    • Role name – Choose the Amazon EKS service role to allow Amazon EKS and the Kubernetes control plane to manage AWS resources on your behalf. For more information, see Amazon EKS IAM Roles.

    • VPC – The VPC to use for your cluster.

    • Subnets – The subnets within the preceding VPC to use for your cluster. By default, the available subnets in the VPC are preselected. Specify all subnets that will host resources for your cluster (such as private subnets for worker nodes and public subnets for load balancers). Your subnets must meet the requirements for an Amazon EKS cluster. For more information, see Cluster VPC Considerations.

    • Security Groups – Specify one or more (up to a limit of five) security groups within the preceding VPC to apply to the cross-account elastic network interfaces for your cluster. Your cluster and worker node security groups must meet the requirements for an Amazon EKS cluster. For more information, see Cluster Security Group Considerations.

      Important

      The worker node AWS CloudFormation template modifies the security group that you specify here, so Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). If this security group is shared with other resources, you might block or disrupt connections to those resources.

    • Endpoint private access – Choose whether to enable or disable private access for your cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API requests that originate from within your cluster's VPC use the private VPC endpoint. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Endpoint public access – Choose whether to enable or disable public access for your cluster's Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server can receive only requests from within the cluster VPC. For more information, see Amazon EKS Cluster Endpoint Access Control.

    • Logging – For each individual log type, choose whether the log type should be Enabled or Disabled. By default, each log type is Disabled. For more information, see Amazon EKS Control Plane Logging.

    • Tags – (Optional) Add any tags to your cluster. For more information, see Tagging Your Amazon EKS Resources.

    Note

    You might receive an error that one of the Availability Zones in your request doesn't have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see Insufficient Capacity.

  4. On the Clusters page, choose the name of your new cluster to view the cluster information.

  5. The Status field shows CREATING until the cluster provisioning process completes. When your cluster provisioning is complete (usually between 10 and 15 minutes), note the API server endpoint and Certificate authority values. These are used in your kubectl configuration.

  6. Now that you have created your cluster, follow the procedures in Installing aws-iam-authenticator and Create a kubeconfig for Amazon EKS to enable communication with your new cluster.

  7. After you enable communication, follow the procedures in Launching Amazon EKS Linux Worker Nodes to add Linux worker nodes to your cluster to support your workloads.

  8. (Optional) After you add Linux worker nodes to your cluster, follow the procedures in Windows Support to add Windows support to your cluster and to add Windows worker nodes. All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to run Windows workloads in your cluster.

AWS CLI

To create your cluster with the AWS CLI

This procedure has the following prerequisites:

  • You have created an Amazon EKS service role, as described in Create your Amazon EKS Service Role.

  • You have created a VPC with subnets and a dedicated security group, as described in Create your Amazon EKS Cluster VPC.

  1. Create your cluster with the following command. Substitute your cluster name, the Amazon Resource Name (ARN) of your Amazon EKS service role that you created in Create your Amazon EKS Service Role, and the subnet and security group IDs for the VPC that you created in Create your Amazon EKS Cluster VPC.

    Important

    Amazon EKS will deprecate Kubernetes version 1.11 on November 4th, 2019. On this day, you will no longer be able to create new 1.11 clusters, and all Amazon EKS clusters running Kubernetes version 1.11 will be updated to the latest available platform version of Kubernetes version 1.12. For more information, see Amazon EKS Version Deprecation.

    Kubernetes version 1.10 is no longer supported on Amazon EKS. You can no longer create new 1.10 clusters, and all existing Amazon EKS clusters running Kubernetes version 1.10 will eventually be automatically updated to the latest available platform version of Kubernetes version 1.11. For more information, see Amazon EKS Version Deprecation.

    Update any 1.10 clusters to version 1.11 or later to avoid service interruption. For more information, see Updating an Amazon EKS Cluster Kubernetes Version.

    aws eks --region region create-cluster --name devel --kubernetes-version 1.14 \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-EXAMPLEBKZRQR \
    --resources-vpc-config subnetIds=subnet-a9189fe2,subnet-50432629,securityGroupIds=sg-f5c54184

    Important

    If you receive a syntax error similar to the following, you might be using a preview version of the AWS CLI for Amazon EKS. The syntax for many Amazon EKS commands has changed since the public service launch. Update your AWS CLI to the latest available version, and delete the custom service model directory at ~/.aws/models/eks.

    aws: error: argument --cluster-name is required

    Note

    If your IAM user doesn't have administrative privileges, you must explicitly add permissions for that user to call the Amazon EKS API operations. For more information, see Amazon EKS Identity-Based Policy Examples.

    Output:

    {
        "cluster": {
            "name": "devel",
            "arn": "arn:aws:eks:us-west-2:111122223333:cluster/devel",
            "createdAt": 1527785885.159,
            "version": "1.14",
            "roleArn": "arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F",
            "resourcesVpcConfig": {
                "subnetIds": [
                    "subnet-a9189fe2",
                    "subnet-50432629"
                ],
                "securityGroupIds": [
                    "sg-f5c54184"
                ],
                "vpcId": "vpc-a54041dc",
                "endpointPublicAccess": true,
                "endpointPrivateAccess": false
            },
            "status": "CREATING",
            "certificateAuthority": {}
        }
    }

    Note

    You might receive an error that one of the Availability Zones in your request doesn't have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see Insufficient Capacity.

  2. Cluster provisioning usually takes between 10 and 15 minutes. You can query the status of your cluster with the following command. When your cluster status is ACTIVE, you can proceed.

    aws eks --region region describe-cluster --name devel --query "cluster.status"
  3. When your cluster provisioning is complete, retrieve the endpoint and certificateAuthority.data values with the following commands. You must add these values to your kubectl configuration so that you can communicate with your cluster.

    1. Retrieve the endpoint.

      aws eks --region region describe-cluster --name devel --query "cluster.endpoint" --output text
    2. Retrieve the certificateAuthority.data.

      aws eks --region region describe-cluster --name devel --query "cluster.certificateAuthority.data" --output text
  4. Now that you have created your cluster, follow the procedures in Installing aws-iam-authenticator and Create a kubeconfig for Amazon EKS to enable communication with your new cluster.

  5. After you enable communication, follow the procedures in Launching Amazon EKS Linux Worker Nodes to add worker nodes to your cluster to support your workloads.

  6. (Optional) After you add Linux worker nodes to your cluster, follow the procedures in Windows Support to add Windows support to your cluster and to add Windows worker nodes. All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to run Windows workloads in your cluster.
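The describe-cluster values retrieved in the steps above ultimately land in your kubeconfig. As a rough illustration of how the pieces fit together, the Python sketch below parses a trimmed response shaped like the sample output in step 1 (with placeholder endpoint and certificate data, and status already ACTIVE) and assembles the standard kubectl configuration structure with the aws-iam-authenticator exec plugin. In practice, follow Create a kubeconfig for Amazon EKS instead of building the file by hand:

```python
import json

# A trimmed describe-cluster response, shaped like the sample output above.
# The endpoint and certificate data are placeholders, and the status is
# shown as ACTIVE (provisioning has finished).
response = json.loads("""
{
  "cluster": {
    "name": "devel",
    "status": "ACTIVE",
    "endpoint": "https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com",
    "certificateAuthority": {"data": "LS0tLS1CRUdJTi-PLACEHOLDER"}
  }
}
""")

cluster = response["cluster"]
assert cluster["status"] == "ACTIVE"  # proceed only once provisioning finishes

# Assemble the corresponding kubeconfig structure. kubectl reads this file
# as YAML (JSON is also accepted); the exec section delegates token
# generation to aws-iam-authenticator.
name = cluster["name"]
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": name,
        "cluster": {
            "server": cluster["endpoint"],
            "certificate-authority-data": cluster["certificateAuthority"]["data"],
        },
    }],
    "users": [{
        "name": name,
        "user": {
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1alpha1",
                "command": "aws-iam-authenticator",
                "args": ["token", "-i", name],
            },
        },
    }],
    "contexts": [{
        "name": name,
        "context": {"cluster": name, "user": name},
    }],
    "current-context": name,
}

print(kubeconfig["clusters"][0]["cluster"]["server"])
```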