Deploy and debug Amazon EKS clusters - AWS Prescriptive Guidance

Deploy and debug Amazon EKS clusters

Created by Svenja Raether (AWS) and Mathew George (AWS)

Environment: PoC or pilot

Technologies: Containers & microservices; Infrastructure; Modernization; Serverless; CloudNative

Workload: All other workloads

AWS services: Amazon EKS; AWS Fargate

Summary

Containers are becoming an essential part of cloud native application development. Kubernetes provides an efficient way to manage and orchestrate containers. Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed, certified Kubernetes-conformant service for building, securing, operating, and maintaining Kubernetes clusters on Amazon Web Services (AWS). It supports running pods on AWS Fargate to provide on-demand, right-sized compute capacity.

It’s important for developers and administrators to know their debugging options when they run containerized workloads. This pattern walks you through deploying and debugging containers on Amazon EKS with AWS Fargate. It covers creating, deploying, accessing, debugging, and cleaning up Amazon EKS workloads.

Prerequisites and limitations

Prerequisites 

  • An active AWS account

  • The AWS Command Line Interface (AWS CLI), installed and configured

  • kubectl, eksctl, and Helm, installed on your local machine

Limitations

  • This pattern provides developers with useful debugging practices for development environments. It does not state best practices for production environments.

  • If you are running Windows, use your operating system–specific commands for setting the environment variables.

Product versions used 

Architecture

Technology stack  

  • Application Load Balancer

  • Amazon EKS

  • AWS Fargate

Target architecture 

All resources shown in the diagram are provisioned by using eksctl and kubectl commands issued from a local machine. If you run a private cluster, you must issue the commands from an instance inside the cluster's private VPC.

The target architecture consists of an EKS cluster that uses the Fargate launch type. This provides on-demand, right-sized compute capacity without the need to specify server types. The EKS cluster has a control plane, which is used to manage the cluster nodes and workloads. The pods are provisioned into private VPC subnets that span multiple Availability Zones. An NGINX web server image is pulled from the Amazon ECR Public Gallery and deployed to the cluster's pods.

The diagram shows how to access the Amazon EKS control plane by using kubectl commands and how to access the application by using the Application Load Balancer.


Four-step process with Amazon EKS control plane and Fargate profile with nodes in separate VPCs.
  1. A local machine outside the AWS Cloud sends commands to the Kubernetes control plane inside an Amazon EKS managed VPC.

  2. Amazon EKS schedules pods based on the selectors in the Fargate profile.

  3. The local machine opens the Application Load Balancer URL in the browser.

  4. The Application Load Balancer distributes traffic among the Kubernetes pods that run on Fargate nodes in private subnets spanning multiple Availability Zones.

Tools

AWS services

  • Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service that’s secure, scalable, and reliable.

  • Amazon Elastic Kubernetes Service (Amazon EKS) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes. This pattern also uses the eksctl command-line tool to work with Kubernetes clusters on Amazon EKS.

  • AWS Fargate helps you run containers without needing to manage servers or Amazon Elastic Compute Cloud (Amazon EC2) instances. You can use it with Amazon Elastic Container Service (Amazon ECS) or Amazon EKS; this pattern uses it with Amazon EKS.

  • Elastic Load Balancing (ELB) distributes incoming application or network traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, in one or more Availability Zones. This pattern uses the AWS Load Balancer Controller to create an Application Load Balancer when a Kubernetes ingress is provisioned. The Application Load Balancer then distributes the incoming traffic among the application pods.

Other tools

  • Helm is an open-source package manager for Kubernetes. In this pattern, Helm is used to install the AWS Load Balancer Controller.

  • Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

  • NGINX is a high-performance web and reverse proxy server.

Epics

Task | Description | Skills required

Create the files.

Using the code in the Additional information section, create the following files:

  • clusterconfig-fargate.yaml

  • nginx-deployment.yaml

  • nginx-service.yaml

  • nginx-ingress.yaml

  • index.html

App developer, AWS administrator, AWS DevOps

Set environment variables.

Note: If a command fails because of previous unfinished tasks, wait a few seconds, and then run the command again.

This pattern uses the AWS Region and cluster name that are defined in the file clusterconfig-fargate.yaml. Set the same values as environment variables to reference them in further commands.

export AWS_REGION="us-east-1"
export CLUSTER_NAME="my-fargate"
App developer, AWS DevOps, AWS systems administrator

Create an EKS cluster.

To create an EKS cluster that uses the specifications from the clusterconfig-fargate.yaml file, run the following command.

eksctl create cluster -f clusterconfig-fargate.yaml

The file contains the ClusterConfig, which provisions a new EKS cluster named my-fargate in the us-east-1 Region and a default Fargate profile (fp-default).

The default Fargate profile is configured with two selectors (default and kube-system).

App developer, AWS DevOps, AWS administrator

Check the created cluster.

To check the created cluster, run the following command.

eksctl get cluster --output yaml

The output should be the following.

- Name: my-fargate
  Owned: "True"
  Region: us-east-1

Check the created Fargate profile by using the CLUSTER_NAME.

eksctl get fargateprofile --cluster $CLUSTER_NAME --output yaml

This command displays information about the resources. You can use the information to verify the created cluster. The output should be the following.

- name: fp-default
  podExecutionRoleARN: arn:aws:iam::<YOUR-ACCOUNT-ID>:role/eksctl-my-fargate-cluster-FargatePodExecutionRole-xxx
  selectors:
  - namespace: default
  - namespace: kube-system
  status: ACTIVE
  subnets:
  - subnet-aaa
  - subnet-bbb
  - subnet-ccc
App developer, AWS DevOps, AWS systems administrator
Task | Description | Skills required

Deploy the NGINX web server.

To apply the NGINX web server deployment on the cluster, run the following command.

kubectl apply -f ./nginx-deployment.yaml

The output should be the following.

deployment.apps/nginx-deployment created

The deployment includes three replicas of the NGINX image pulled from the Amazon ECR Public Gallery. The pods run in the default namespace and expose port 80.
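(Optional) To wait until all replicas report as ready before you continue, you can watch the rollout progress with the following command.

kubectl rollout status deployment/nginx-deployment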

App developer, AWS DevOps, AWS systems administrator

Check the deployment and pods.

(Optional) Check the deployment. You can verify the status of your deployment with the following command.

kubectl get deployment

The output should be the following.

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           7m14s

A pod is a deployable object in Kubernetes, containing one or more containers. To list all pods, run the following command. 

kubectl get pods

The output should be the following.

NAME                        READY   STATUS    RESTARTS   AGE
nginx-deployment-xxxx-aaa   1/1     Running   0          94s
nginx-deployment-xxxx-bbb   1/1     Running   0          94s
nginx-deployment-xxxx-ccc   1/1     Running   0          94s
App developer, AWS DevOps, AWS administrator

Scale the deployment.

To scale the deployment from the three replicas that were specified in nginx-deployment.yaml to four replicas, use the following command. 

kubectl scale deployment nginx-deployment --replicas 4

The output should be the following.

deployment.apps/nginx-deployment scaled
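To confirm the new replica count, you can query the deployment again; after the additional pod is scheduled on Fargate, it should report four of four replicas as ready.

kubectl get deployment nginx-deployment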
App developer, AWS DevOps, AWS systems administrator
Task | Description | Skills required

Set environment variables.

Describe the cluster’s CloudFormation stack to retrieve information about its VPC.

aws cloudformation describe-stacks --stack-name eksctl-$CLUSTER_NAME-cluster --query "Stacks[0].Outputs[?OutputKey==\`VPC\`].OutputValue"

The output should be the following.

[ "vpc-<YOUR-VPC-ID>" ]

Copy the VPC ID and export it as an environment variable.

export VPC_ID="vpc-<YOUR-VPC-ID>"
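If you prefer not to copy the value manually, you can capture it in a single step. The following sketch reuses the same stack name and JMESPath query, with text output.

export VPC_ID=$(aws cloudformation describe-stacks --stack-name eksctl-$CLUSTER_NAME-cluster --query "Stacks[0].Outputs[?OutputKey=='VPC'].OutputValue" --output text)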
App developer, AWS DevOps, AWS systems administrator

Configure IAM for the cluster service account.

Use the AWS_REGION and CLUSTER_NAME from the earlier epic to create an IAM OpenID Connect (OIDC) provider for the cluster.

eksctl utils associate-iam-oidc-provider \
  --region $AWS_REGION \
  --cluster $CLUSTER_NAME \
  --approve
App developer, AWS DevOps, AWS systems administrator

Download and create the IAM policy.

Download the IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf.

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

Create the policy in your AWS account by using the AWS CLI.

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json

You should see the following output.

{ "Policy": { "PolicyName": "AWSLoadBalancerControllerIAMPolicy", "PolicyId": "<YOUR_POLICY_ID>", "Arn": "arn:aws:iam::<YOUR-ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy", "Path": "/", "DefaultVersionId": "v1", "AttachmentCount": 0, "PermissionsBoundaryUsageCount": 0, "IsAttachable": true, "CreateDate": "<YOUR-DATE>", "UpdateDate": "<YOUR-DATE>" } }

Save the Amazon Resource Name (ARN) of the policy as $POLICY_ARN.

export POLICY_ARN="arn:aws:iam::<YOUR-ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy"
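If you would rather derive the ARN than type your account ID by hand, the following sketch builds it from your caller identity; it assumes the policy name used in the previous step.

export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy"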
App developer, AWS DevOps, AWS systems administrator

Create an IAM service account.

Create an IAM service account named aws-load-balancer-controller in the kube-system namespace. Use the CLUSTER_NAME, AWS_REGION, and POLICY_ARN that you previously configured.

eksctl create iamserviceaccount \
  --cluster=$CLUSTER_NAME \
  --region=$AWS_REGION \
  --attach-policy-arn=$POLICY_ARN \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --override-existing-serviceaccounts \
  --approve

Verify the creation.

eksctl get iamserviceaccount \
  --cluster $CLUSTER_NAME \
  --name aws-load-balancer-controller \
  --namespace kube-system \
  --output yaml

The output should be the following.

- metadata:
    name: aws-load-balancer-controller
    namespace: kube-system
  status:
    roleARN: arn:aws:iam::<YOUR-ACCOUNT-ID>:role/eksctl-my-fargate-addon-iamserviceaccount-ku-Role1-<YOUR-ROLE-ID>
  wellKnownPolicies:
    autoScaler: false
    awsLoadBalancerController: false
    certManager: false
    ebsCSIController: false
    efsCSIController: false
    externalDNS: false
    imageBuilder: false
App developer, AWS DevOps, AWS systems administrator

Install the AWS Load Balancer Controller.

Add the Amazon EKS chart repository to Helm.

helm repo add eks https://aws.github.io/eks-charts

Update the Helm repository index.

helm repo update

Apply the Kubernetes custom resource definitions (CRDs) that the AWS Load Balancer Controller chart uses.

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

The output should be the following.

customresourcedefinition.apiextensions.k8s.io/ingressclassparams.elbv2.k8s.aws created
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws created

Install the Helm chart, using the environment variables that you set previously.

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set region=$AWS_REGION \
  --set vpcId=$VPC_ID \
  --set serviceAccount.name=aws-load-balancer-controller \
  -n kube-system

The output should be the following.

NAME: aws-load-balancer-controller
LAST DEPLOYED: <YOUR-DATE>
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
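(Optional) Before you create the ingress, you can confirm that the controller deployment is up and available.

kubectl get deployment -n kube-system aws-load-balancer-controller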
App developer, AWS DevOps, AWS systems administrator

Create an NGINX service.

Create a service to expose the NGINX pods by using the nginx-service.yaml file.

kubectl apply -f nginx-service.yaml

The output should be the following.

service/nginx-service created
App developer, AWS DevOps, AWS systems administrator

Create the Kubernetes ingress resource.

Create the Kubernetes ingress for the NGINX service by using the nginx-ingress.yaml file.

kubectl apply -f nginx-ingress.yaml

The output should be the following.

ingress.networking.k8s.io/nginx-ingress created
App developer, AWS DevOps, AWS systems administrator

Get the load balancer URL.

To retrieve the ingress information, use the following command.

kubectl get ingress nginx-ingress

The output should be the following.

NAME            CLASS    HOSTS   ADDRESS                                                PORTS   AGE
nginx-ingress   <none>   *       k8s-default-nginxing-xxx.us-east-1.elb.amazonaws.com   80      80s

Copy the ADDRESS (for example, k8s-default-nginxing-xxx.us-east-1.elb.amazonaws.com) from the output, and paste it into your browser to access the index.html file.
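You can also fetch the page from the command line. The following sketch reads the same hostname from the ingress status by using a jsonpath expression; note that the load balancer can take a few minutes to become active after the ingress is created.

curl http://$(kubectl get ingress nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')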

App developer, AWS DevOps, AWS systems administrator
Task | Description | Skills required

Select a pod.

List all pods, and copy the desired pod's name. 

kubectl get pods

The output should be the following.

NAME                        READY   STATUS    RESTARTS   AGE
nginx-deployment-xxxx-aaa   1/1     Running   0          55m
nginx-deployment-xxxx-bbb   1/1     Running   0          55m
nginx-deployment-xxxx-ccc   1/1     Running   0          55m
nginx-deployment-xxxx-ddd   1/1     Running   0          42m

This command lists the existing pods and additional information.

To debug a specific pod in the following steps, save its name in the POD_NAME environment variable so that the subsequent commands can reference it.

export POD_NAME="nginx-deployment-<YOUR-POD-NAME>"
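If any replica will do, you can also select the first pod that matches the deployment's app=nginx label instead of copying a name; this sketch uses a jsonpath expression.

export POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')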
App developer, AWS DevOps, AWS systems administrator

Access the logs.

Get the logs from the pod that you want to debug.

kubectl logs $POD_NAME
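To stream the logs continuously while you interact with the application, or to view the logs of a container that has restarted, use the -f and --previous flags (the latter returns output only if the container has restarted).

kubectl logs -f $POD_NAME
kubectl logs --previous $POD_NAME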
App developer, AWS systems administrator, AWS DevOps

Forward the NGINX port.

Use port forwarding to map the port that the NGINX web server listens on in the pod to a port on your local machine.

kubectl port-forward deployment/nginx-deployment 8080:80

In your browser, open the following URL.

http://localhost:8080

The port-forward command provides access to the index.html file without making it publicly available through a load balancer. This is useful for accessing the running application while you debug it. You can stop the port forwarding by pressing Ctrl+C.
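While the port forward is running, you can also verify the response from a second terminal.

curl http://localhost:8080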

App developer, AWS DevOps, AWS systems administrator

Run commands within the pod.

To look at the current index.html file, use the following command. 

kubectl exec $POD_NAME -- cat /usr/share/nginx/html/index.html

You can use the exec command to issue any command directly in the pod. This is useful for debugging running applications.
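For interactive debugging, you can open a shell in the pod instead of issuing single commands, assuming the image ships a shell such as /bin/sh (the NGINX image does). Type exit to leave the shell.

kubectl exec -it $POD_NAME -- /bin/sh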

App developer, AWS DevOps, AWS systems administrator

Copy files to a pod.

Remove the default index.html file on this pod.

kubectl exec $POD_NAME -- rm /usr/share/nginx/html/index.html

Upload the customized local file index.html to the pod.

kubectl cp index.html $POD_NAME:/usr/share/nginx/html/

You can use the cp command to change or add files directly to any of the pods.
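The cp command also works in the other direction. For example, to copy a file from the pod to your local machine for inspection (the local file name here is arbitrary), run the following command.

kubectl cp $POD_NAME:/usr/share/nginx/html/index.html ./index-from-pod.html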

App developer, AWS DevOps, AWS systems administrator

Use port-forwarding to display the change.

Use port-forwarding to verify the changes that you made to this pod.

kubectl port-forward pod/$POD_NAME 8080:80

Open the following URL in your browser.

http://localhost:8080

The applied changes to the index.html file should be visible in the browser.

App developer, AWS DevOps, AWS systems administrator
Task | Description | Skills required

Delete the load balancer.

Delete the ingress.

kubectl delete ingress/nginx-ingress

The output should be the following.

ingress.networking.k8s.io "nginx-ingress" deleted

Delete the service.

kubectl delete service/nginx-service

The output should be the following.

service "nginx-service" deleted

Delete the load balancer controller.

helm delete aws-load-balancer-controller -n kube-system

The output should be the following.

release "aws-load-balancer-controller" uninstalled

Delete the service account.

eksctl delete iamserviceaccount --cluster $CLUSTER_NAME --namespace kube-system --name aws-load-balancer-controller
App developer, AWS DevOps, AWS systems administrator

Delete the deployment.

To delete the deployment resources, use the following command.

kubectl delete deploy/nginx-deployment

The output should be the following.

deployment.apps "nginx-deployment" deleted
App developer, AWS DevOps, AWS systems administrator

Delete the cluster.

Delete the EKS cluster by using the following command, where my-fargate is the cluster name.

eksctl delete cluster --name $CLUSTER_NAME

This command deletes the entire cluster, including all associated resources.
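(Optional) After the deletion finishes, you can confirm that the cluster no longer exists; my-fargate should not appear in the output.

eksctl get cluster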

App developer, AWS DevOps, AWS systems administrator

Delete the IAM policy.

Delete the previously created policy by using the AWS CLI.

aws iam delete-policy --policy-arn $POLICY_ARN
App developer, AWS administrator, AWS DevOps

Troubleshooting

Issue | Solution

You receive an error message upon cluster creation stating that your targeted Availability Zone doesn't have sufficient capacity to support the cluster. You should see a message similar to the following.

Cannot create cluster 'my-fargate' because us-east-1e, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f

Create the cluster again by using the recommended Availability Zones from the error message. Specify a list of Availability Zones at the end of your clusterconfig-fargate.yaml file (for example, availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]), as shown in the example that follows.
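For example, the amended clusterconfig-fargate.yaml could look like the following; the zone list is illustrative, so use the zones that your error message recommends.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-fargate
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system
availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]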

Related resources

Additional information

clusterconfig-fargate.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-fargate
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system

nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: "nginx-deployment"
  namespace: "default"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx"
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:latest
          ports:
            - containerPort: 80

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "nginx"

nginx-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: "default"
  name: "nginx-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "nginx-service"
                port:
                  number: 80

index.html

<!DOCTYPE html>
<html>
<body>
<h1>Welcome to your customized nginx!</h1>
<p>You modified the file on this running pod</p>
</body>
</html>