
Deploy a sample Linux workload

In this topic, you create a Kubernetes manifest and deploy it to your cluster.

Prerequisites


  • You must have an existing Kubernetes cluster to deploy a sample application. If you don't have an existing cluster, you can deploy an Amazon EKS cluster using one of the Getting started with Amazon EKS guides.

  • You must have kubectl installed on your computer. For more information, see Installing kubectl.

  • kubectl must be configured to communicate with your cluster. For more information, see Create a kubeconfig for Amazon EKS.

To deploy a sample application

  1. Create a Kubernetes namespace for the sample app.

    kubectl create namespace <my-namespace>
  2. Create a Kubernetes service and deployment.

    1. Save the following contents to a file named sample-service.yaml on your computer. If you're deploying the pods to AWS Fargate, make sure that the value for namespace matches the namespace that you defined in your AWS Fargate profile. This sample deployment pulls a container image from a public repository, deploys three replicas of it to your cluster, and creates a Kubernetes service with its own IP address that can be accessed only from within the cluster. To access the service from outside the cluster, deploy a network load balancer or the ALB Ingress Controller.

      The image is a multi-architecture image, so if your cluster includes both x86 and Arm nodes, the pod can be scheduled on either type of hardware architecture. Kubernetes pulls the image variant that matches the architecture of the node that it schedules the pod on. Alternatively, if you only want the deployment to run on nodes with a specific hardware architecture, or if your cluster only contains one hardware architecture, remove either amd64 or arm64 from the example that follows.

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        selector:
          app: my-app
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-deployment
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: kubernetes.io/arch
                          operator: In
                          values:
                            - amd64
                            - arm64
            containers:
              - name: nginx
                image:
                ports:
                  - containerPort: 80
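      As an illustrative variant (not part of the sample manifest), if you wanted the deployment to run on x86 nodes only, the affinity section would keep just amd64. This assumes the standard Kubernetes node label kubernetes.io/arch as the match key:

      ```
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      ```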

      To learn more about Kubernetes services and deployments, see the Kubernetes documentation. The containers in the sample manifest don't use network storage, but you can configure them to do so. For more information, see Storage. Though not implemented in this example, we recommend that you create Kubernetes service accounts for your pods and associate them with AWS IAM roles. Specifying service accounts enables your pods to have the minimum permissions that they require to interact with other services. For more information, see IAM roles for service accounts.
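      As a sketch of that recommendation (the service account name and role ARN below are placeholders, and the IAM role must already be configured for IAM roles for service accounts), a service account annotated with a role ARN might look like this:

      ```
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: my-app-sa            # placeholder name
        namespace: my-namespace
        annotations:
          # Placeholder ARN; replace with a role configured for
          # IAM roles for service accounts.
          eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
      ```

      Pods would then reference it by setting serviceAccountName: my-app-sa in their pod spec.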

    2. Deploy the application.

      kubectl apply -f <sample-service.yaml>
  3. View all resources that exist in the my-namespace namespace.

    kubectl get all -n my-namespace


    NAME                                 READY   STATUS    RESTARTS   AGE
    pod/my-deployment-776d8f8fd8-78w66   1/1     Running   0          27m
    pod/my-deployment-776d8f8fd8-dkjfr   1/1     Running   0          27m
    pod/my-deployment-776d8f8fd8-wmqj6   1/1     Running   0          27m

    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/my-service   ClusterIP                <none>        80/TCP    32m

    NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/my-deployment   3/3     3            3           27m

    NAME                                       DESIRED   CURRENT   READY   AGE
    replicaset.apps/my-deployment-776d8f8fd8   3         3         3       27m

    In the output, you see the service and deployment that are specified in the sample manifest deployed in the previous step. You also see three pods, because the sample manifest specifies 3 for replicas. For more information about pods, see Pods in the Kubernetes documentation. Kubernetes automatically created the ReplicaSet resource, even though it isn't specified in the sample manifest. For more information about ReplicaSets, see ReplicaSet in the Kubernetes documentation.


    Kubernetes will maintain the number of replicas specified in the manifest. If this were a production deployment and you wanted Kubernetes to horizontally scale the number of replicas or vertically scale the compute resources for the pods, you'd need to use the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler.
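    As a sketch of the horizontal case (assuming the Kubernetes Metrics Server is installed in the cluster; the autoscaler name and scaling bounds below are illustrative, not part of this walkthrough), a HorizontalPodAutoscaler for the sample deployment could look like the following:

    ```
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-deployment-hpa    # illustrative name
      namespace: my-namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-deployment
      minReplicas: 3
      maxReplicas: 10            # illustrative upper bound
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50
    ```

    You would save and apply it with kubectl apply -f, the same way as the sample manifest.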

  4. View the details of the deployed service.

    kubectl -n <my-namespace> describe service <my-service>

    Abbreviated output

    Name:              my-service
    Namespace:         my-namespace
    Labels:            app=my-app
    Annotations:       {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"my-app"},"name":"my-service","namespace":"my-namespace"}...
    Selector:          app=my-app
    Type:              ClusterIP
    IP:
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    ...

    In the output, the value for IP: is a unique IP address that can be reached from any pod within the cluster.
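    For example (an illustrative command, not part of the original steps; the temporary pod name and the busybox image are assumptions), you could request the service from a throwaway pod inside the cluster:

    ```shell
    # Start a temporary pod, fetch the service by name, and remove the pod on exit.
    kubectl run curl-test -n my-namespace --rm -it --restart=Never \
      --image=busybox -- wget -qO- http://my-service
    ```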

  5. View the details of one of the pods that was deployed.

    kubectl -n <my-namespace> describe pod <my-deployment-776d8f8fd8-78w66>

    Abbreviated output

    Name:           my-deployment-776d8f8fd8-78w66
    Namespace:      my-namespace
    Priority:       0
    Node:           ...
    IP:
    IPs:
      IP:
    Controlled By:  ReplicaSet/my-deployment-776d8f8fd8
    ...
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    ...
    Events:
      Type    Reason     Age    From               Message
      ----    ------     ----   ----               -------
      Normal  Scheduled  3m20s  default-scheduler  Successfully assigned my-namespace/my-deployment-776d8f8fd8-78w66 to ...

    In the output, the value for IP: is a unique IP address that, by default, is assigned to the pod from the CIDR block of the subnet that the node is in. If you'd prefer that pods be assigned IP addresses from different CIDR blocks than the subnet that the node is in, you can change the default behavior. For more information, see CNI custom networking. You can also see, under Events, that the Kubernetes scheduler scheduled the pod on the node listed in the Scheduled event.

  6. Execute a shell on one of the pods, replacing the example pod name below with the name of one of the pods returned in step 3.

    kubectl exec -it <my-deployment-776d8f8fd8-78w66> -n <my-namespace> -- /bin/bash
  7. View the DNS resolver configuration file.

    cat /etc/resolv.conf


    nameserver
    search my-namespace.svc.cluster.local svc.cluster.local cluster.local us-west-2.compute.internal
    options ndots:5

    In the previous output, the value for nameserver is the IP address of the cluster's DNS service, which is automatically assigned as the name server for any pod deployed to the cluster. The search domains are why the service created earlier can be reached from within a pod by the short name my-service as well as by its fully qualified name, my-service.my-namespace.svc.cluster.local.

  8. Disconnect from the pod by typing exit.

  9. Remove the sample service, deployment, pods, and namespace.

    kubectl delete namespace <my-namespace>