Amazon EKS nodes on AWS Outposts

You can create and run Amazon EKS nodes on AWS Outposts. AWS Outposts enables native AWS services, infrastructure, and operating models in on-premises facilities for low latency, local data processing, and data residency needs. In AWS Outposts environments, you can use the same AWS APIs, tools, and infrastructure that you use in the AWS Cloud. For more information about AWS Outposts, see the AWS Outposts User Guide.

You can use Amazon EKS to run Kubernetes applications on-premises with AWS Outposts. Amazon EKS on AWS Outposts supports extended clusters, with the Kubernetes control plane running in the parent AWS Region, and worker nodes running on AWS Outposts. The Kubernetes control plane is fully managed by AWS, and you can use the same Amazon EKS APIs, tools, and console to create and run Amazon EKS worker nodes on AWS Outposts.


Diagram: Outposts configuration

Prerequisites

The following are the prerequisites for using Amazon EKS nodes on AWS Outposts:

  • You must have installed and configured an Outpost in your on-premises data center. For more information, see Create an Outpost and order Outpost capacity in the AWS Outposts User Guide.

  • You must have a reliable network connection between your Outpost and its parent AWS Region. We recommend that you provide highly available, low-latency connectivity between your Outpost and its parent AWS Region. For more information, see Outpost connectivity to the local network in the AWS Outposts User Guide.

  • The AWS Region for the Outpost must support Amazon EKS. For a list of supported AWS Regions, see Amazon EKS service endpoints in the AWS General Reference.

Outpost considerations

Architecture

  • AWS Outposts are available in a variety of form factors including 1U and 2U Outposts servers and 42U Outposts racks. Amazon EKS is supported on the 42U Outposts racks only.

  • A single subnet cannot span multiple logical Outposts, and inter-Outposts traffic must use customer-owned IP addresses and traverse the local network. Because of this, it is recommended to run a single Amazon EKS cluster per logical Outpost.

  • Traffic from Amazon EKS worker nodes running on AWS Outposts to the control plane in an AWS Region stays within your VPC and traverses the service link connection. You can use private or public connectivity for your service link connection. For more information about the service link, see Outposts Connectivity to AWS Regions in the AWS Outposts User Guide.

  • If network connectivity between your Outpost and its parent AWS Region is lost, your Amazon EKS worker nodes will continue to run. However, you cannot create new nodes or perform mutating management actions on existing deployments until connectivity is restored. The recommended course of action during network disconnects is to attempt to reconnect your AWS Outposts to the parent AWS Region following the network connectivity checklist. In case of instance failures during periods of network disconnect, the instances will not be replaced automatically. The Amazon EKS Kubernetes control plane runs in the parent AWS Region, and missing kubelet heartbeats can lead to the following:

    • Pods on the Outpost being marked as unhealthy

    • The node status timing out

    • Pods being marked for eviction

    For more information, see Node Controller in the Kubernetes documentation.
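
During a disconnect, you can observe this behavior from the parent AWS Region with kubectl. A minimal sketch (the node name is illustrative):

```shell
# Disconnected Outpost nodes eventually report NotReady once
# kubelet heartbeats stop reaching the control plane.
kubectl get nodes -o wide

# Inspect the conditions and recent events for a specific node
kubectl describe node ip-10-0-1-23.region-code.compute.internal
```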

Operational

  • When you create your Amazon EKS cluster, you must use subnets that run in the AWS Region because these are used for the creation of the Amazon EKS Kubernetes control plane. During cluster creation, do not use subnets that run on your AWS Outposts.

  • You can run self-managed nodes on AWS Outposts only; managed node groups and Fargate aren't supported. When you create a self-managed node group, you must pass the subnets that exist on your AWS Outposts.

  • The Amazon EKS worker nodes’ VPC must be associated with a local gateway (LGW) route table for Services running on the worker nodes to be accessible over the local network. If you are using customer-owned IP addresses, the subnet in which the worker nodes run must have a route to the LGW.

  • IPv6 cannot be used for service link or local network traffic on AWS Outposts. You shouldn't create your Amazon EKS cluster with the IPv6 IP family if you plan to run Amazon EKS worker nodes on AWS Outposts.

  • The Kubernetes components for Amazon EKS worker nodes are pulled from Amazon Elastic Container Registry (Amazon ECR) in the parent AWS Region. When bootstrapping new Amazon EC2 instances to your Amazon EKS cluster, expect an increase in traffic over your service link connection.

  • You can use Amazon ECR in the parent AWS Region to host your application container images. The size of your application container images will affect the service link bandwidth usage and the startup time when deploying new Pods that are not already cached. If you need to reduce your application deployment time, consider hosting a local container registry or cache for your application container images. If you have “isolated” subnets that do not have a path to the internet, you must set up VPC Endpoints for Amazon S3 and Amazon ECR to create Amazon EKS worker nodes on AWS Outposts.

  • When creating your Amazon EKS worker nodes, you must use the gp2 volume type.
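
Two of the networking requirements above, the LGW route table association and the VPC endpoints for isolated subnets, can be sketched with the AWS CLI. All IDs, CIDR blocks, and the Region below are placeholders:

```shell
# Associate the worker-node VPC with the Outpost's LGW route table
aws ec2 create-local-gateway-route-table-vpc-association \
    --local-gateway-route-table-id my-lgw-route-table-id \
    --vpc-id my-vpc-id

# Route local-network traffic from the worker-node subnet to the LGW
aws ec2 create-route \
    --route-table-id my-subnet-route-table-id \
    --destination-cidr-block my-local-network-cidr \
    --local-gateway-id my-lgw-id

# For isolated subnets: a gateway endpoint for Amazon S3 (image layers)
aws ec2 create-vpc-endpoint \
    --vpc-id my-vpc-id \
    --service-name com.amazonaws.region-code.s3 \
    --route-table-ids my-route-table-id

# ...and interface endpoints for the Amazon ECR API and Docker registry
for svc in ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
      --vpc-id my-vpc-id \
      --vpc-endpoint-type Interface \
      --service-name "com.amazonaws.region-code.${svc}" \
      --subnet-ids my-subnet-id \
      --security-group-ids my-security-group-id
done
```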

Application

  • With AWS Outposts, you can run some AWS services locally, and you can connect to a broad range of services available in the parent AWS Region. The locally available AWS services have lower latency response times than those running in the parent AWS Region. The AWS services that you use in the AWS Region are not available during periods of network disconnect from the parent AWS Region. For information on the AWS services available locally, see the AWS Outposts User Guide.

  • You can connect to your applications running on Amazon EKS on AWS Outposts over your local network using the normal methods for Kubernetes Services and Ingress. Application Load Balancer (ALB) is available for Kubernetes Ingress on Outposts racks. Network Load Balancer (NLB) is not available on AWS Outposts. A common practice to conserve capacity on AWS Outposts is to use a single ALB deployment with path-based routing for each Kubernetes Service. For more information, see the Application Load Balancer documentation.

  • Amazon EBS volumes do not span physical or logical AWS Outposts, similar to the multi-Availability Zone behavior in AWS Regions. For example, if a Node running on physical Outpost A moves to physical Outpost B, the EBS volume will not move with it. If you are running stateful workloads on Amazon EKS with Pods backed by EBS volumes, consider implementing dual writes at the application layer or using an alternative storage mechanism if your application and its data must remain available during single-rack failures.
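
The single-ALB, path-based routing pattern described above can be sketched as a Kubernetes Ingress. This sketch assumes the AWS Load Balancer Controller is installed in the cluster; the Service names, paths, and annotation values are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: outposts-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
```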

Deploy an Amazon EKS cluster with worker nodes on AWS Outposts

This section describes how to create an Amazon EKS cluster and deploy Amazon EKS worker nodes on AWS Outposts using eksctl and the AWS CLI. You must have permissions to create and manage subnets on AWS Outposts. For more information about shareable Outpost resources, see https://docs.aws.amazon.com/outposts/latest/userguide/sharing-outposts.html#sharing-resources.

  1. Create a public Amazon EKS cluster using a YAML file:

    eksctl create cluster -f cluster-config.yaml

    An example YAML file is shown below:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-outposts-cluster
      version: "1.21"
      region: region-code
    cloudWatch:
      clusterLogging:
        enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
    iam:
      withOIDC: true
    addons:
      - name: vpc-cni
      - name: coredns
      - name: kube-proxy

    To create a fully private cluster, add the following to your config YAML file:

    privateCluster:
      enabled: true

    The last line of your output should look similar to this:

    EKS cluster "my-outposts-cluster" in "region-code" region is ready

    You can confirm that your cluster was created in the AWS Management Console or with the following command:

    eksctl get cluster --name my-outposts-cluster
  2. Identify the VPC created with your new cluster. This VPC will host the subnet that contains your worker nodes. You can find your vpc-id with the command:

    eksctl get cluster --name my-outposts-cluster
  3. Identify a subnet and CIDR block for your Outpost. The CIDR block shouldn't conflict with other IP addresses in use on your local network; a subnet calculator can help you determine appropriate settings for your environment. You will use this CIDR value in the next step. Your availability-zone is visible in the AWS Management Console.
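     If you prefer the CLI, you can also look up the Outpost ARN and its Availability Zone with the following command (the exact output shape may vary by AWS CLI version):

    ```shell
    aws outposts list-outposts \
        --query "Outposts[].{Arn:OutpostArn,AZ:AvailabilityZone}" \
        --output table
    ```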

  4. Create a subnet on AWS Outposts using AWS CLI to host your worker nodes. Use the vpc-id that you retrieved in the previous step in the following command:

    aws ec2 create-subnet \
        --region region-code \
        --availability-zone region-code \
        --outpost-arn my-outpost-arn \
        --vpc-id my-vpc-id \
        --cidr-block my-cidr-block

    Note the subnet ID that appears in the output after creation. The relevant line should look similar to the following:

    "SubnetId": "subnet-1234567890abcdef0",

    You can verify the subnet creation in the AWS Management Console under your cluster details.
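
     You can also verify the subnet with the AWS CLI; the subnet ID below is the example value from the previous output:

    ```shell
    aws ec2 describe-subnets \
        --subnet-ids subnet-1234567890abcdef0 \
        --query "Subnets[].{Id:SubnetId,Cidr:CidrBlock,Outpost:OutpostArn}"
    ```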

  5. Create worker nodes by using a YAML file. Replace the vpc-id with the value that you identified in step 2 and the subnet-id with the value from step 4. The security-group-id was created with your cluster in step 1 and can be retrieved with the command:

    eksctl get cluster --name my-outposts-cluster
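
     Alternatively, the cluster security group ID can be retrieved directly with the AWS CLI:

    ```shell
    aws eks describe-cluster \
        --name my-outposts-cluster \
        --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
        --output text
    ```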

    The nodes can be created with a YAML file by using the command:

    eksctl create nodegroup -f worker-nodes-config.yaml

    The following is an example YAML file to include in this step. For instanceType, select an instance type that is:

    • slotted on your Outpost

    • available for your workload

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-outposts-cluster
      region: region-code
    nodeGroups:
      - name: outpost-worker-nodes
        instanceType: m5.large
        desiredCapacity: 1
        minSize: 1
        maxSize: 1
        volumeSize: 50
        volumeType: gp2
        volumeEncrypted: true
        subnets:
          - subnet-042e2c531a5713a5b
        privateNetworking: true

    To create private worker nodes, add the following to the node group entry in the YAML file:

    subnets:
      - outpost-subnet
    volumeType: gp2
    privateNetworking: true

    The last lines of your output should look similar to this:

    created 1 nodegroup(s) in cluster "my-outposts-cluster"
    created 0 managed nodegroup(s) in cluster "my-outposts-cluster"
    checking security group configuration for all nodegroups
  6. (Optional) To expose worker nodes over your local network, see How local gateways work.
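
Once the node group is created, you can confirm that the Outpost instances have joined the cluster. A quick check with kubectl (node names will reflect your subnet's IP range):

```shell
# Nodes should transition to Ready after bootstrapping completes
kubectl get nodes -o wide

# Core add-ons should be running on the new nodes
kubectl get pods -n kube-system -o wide
```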

Now that you have a configured Amazon EKS cluster with worker nodes running on AWS Outposts, you can install add-ons and deploy applications to your cluster. See the following for more information on how to extend your cluster's functionality: