Deploy an Amazon EKS cluster on AWS Outposts
This topic provides an overview of what to consider when running a local cluster on an Outpost, along with instructions for deploying one.
Important
- These considerations aren't replicated in related Amazon EKS documentation. If other Amazon EKS documentation topics conflict with the considerations here, follow the considerations here.
- These considerations are subject to frequent change, so we recommend that you review this topic regularly.
- Many of the considerations differ from the considerations for creating a cluster in the AWS Cloud.
- Local clusters support Outpost racks only. A single local cluster can run across multiple physical Outpost racks that comprise a single logical Outpost, but it can't run across multiple logical Outposts. Each logical Outpost has a single Outpost ARN, which you can look up as shown in the example after this list.
- Local clusters run and manage the Kubernetes control plane in your account on the Outpost. You can't run workloads on the Kubernetes control plane instances or modify the Kubernetes control plane components. These instances are managed by the Amazon EKS service, and changes that you make to the Kubernetes control plane don't persist through automatic Amazon EKS management actions, such as patching.
- Local clusters support self-managed add-ons and self-managed Amazon Linux node groups. The Amazon VPC CNI plugin for Kubernetes, kube-proxy, and CoreDNS add-ons are automatically installed on local clusters.
- Local clusters require Amazon EBS on Outposts. Your Outpost must have Amazon EBS available for the Kubernetes control plane storage. Outposts support Amazon EBS gp2 volumes only.
- Amazon EBS-backed Kubernetes PersistentVolumes are supported using the Amazon EBS CSI driver.
- The control plane instances of local clusters are set up in a stacked highly available topology. Two out of the three control plane instances must be healthy at all times to maintain quorum. If quorum is lost, contact AWS Support, because service-side actions are required to enable the new managed instances.
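For example, you can use the AWS CLI to look up your Outpost ARN and confirm which instance types the Outpost has capacity for. This is a minimal sketch; the Outpost ID op-0123456789abcdef0 is a placeholder for your own.

```bash
# List the Outposts in your account; each logical Outpost has a single ARN.
aws outposts list-outposts \
    --query 'Outposts[*].[OutpostId,OutpostArn]' \
    --output table

# Show the instance types available on a specific Outpost. Replace the
# placeholder ID with your own Outpost ID.
aws outposts get-outpost-instance-types \
    --outpost-id op-0123456789abcdef0
```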
Prerequisites
- Familiarity with the Outposts deployment options, Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations, and VPC requirements and considerations.
- An existing Outpost. For more information, see What is AWS Outposts.
- The kubectl command line tool is installed on your computer or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.29, you can use kubectl version 1.28, 1.29, or 1.30 with it. To install or upgrade kubectl, see Set up kubectl and eksctl.
- Version 2.12.3 or later or version 1.27.160 or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use aws --version | cut -d / -f2 | cut -d ' ' -f1 (see the version check example after this list). Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see Installing and Quick configuration with aws configure in the AWS Command Line Interface User Guide. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see Installing AWS CLI to your home directory in the AWS CloudShell User Guide.
- An IAM principal (user or role) with permissions to create and describe an Amazon EKS cluster. For more information, see Create a local Kubernetes cluster on an Outpost and List or describe all clusters. An illustrative policy sketch follows this list.
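As a quick check of the tooling prerequisites above, the following commands print the installed client versions. This is a sketch that assumes kubectl and the AWS CLI are already on your PATH.

```bash
# kubectl should be within one minor version of your cluster's
# Kubernetes version.
kubectl version --client

# The AWS CLI should be version 2.12.3 or later (or 1.27.160 or later
# for version 1).
aws --version | cut -d / -f2 | cut -d ' ' -f1
```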
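For the IAM prerequisite, the following sketch creates an illustrative identity-based policy that allows creating and describing clusters. The policy name is hypothetical, and the statement isn't exhaustive; creating a local cluster typically requires additional permissions (for example, iam:PassRole for the cluster IAM role), so follow the linked topics for the authoritative policy.

```bash
# Write an illustrative policy document. NOTE: this sketch is not
# exhaustive; local clusters need additional permissions beyond these.
cat > eks-local-cluster-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreateCluster",
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the policy so that it can be attached to the IAM principal
# that will create the cluster. The policy name is a placeholder.
aws iam create-policy \
    --policy-name eks-local-cluster-policy \
    --policy-document file://eks-local-cluster-policy.json
```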
When a local Amazon EKS cluster is created, the IAM principal that creates the cluster is permanently added. The principal is specifically added to the Kubernetes RBAC authorization table as the administrator, with system:masters permissions. The identity of this entity isn't visible in your cluster configuration, so it's important to note which entity created the cluster and make sure that you never delete it. Initially, only the principal that created the cluster can make calls to the Kubernetes API server using kubectl. If you use the console to create the cluster, make sure that the same IAM credentials are in the AWS SDK credential chain when you run kubectl commands on your cluster. After your cluster is created, you can grant other IAM principals access to your cluster.
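For example, after the cluster is created, the creating principal can configure kubectl and verify access. This is a sketch that assumes a cluster named my-cluster in us-west-2; both names are placeholders for your own values.

```bash
# Add or update the kubeconfig entry for the cluster. Run this with the
# same IAM credentials that created the cluster.
aws eks update-kubeconfig --region us-west-2 --name my-cluster

# Confirm that the Kubernetes API server is reachable.
kubectl get svc -n default
```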