Scheduling
When applications need to scale out across multiple hosts, you need an easy way to manage each additional container and node while abstracting the complexity of the underlying infrastructure. In this context, scheduling refers to placing containers on the most appropriate host in a scalable, automated way. In this section, we review the key scheduling aspects of various AWS container services.
Amazon ECS provides flexible scheduling capabilities by leveraging the same cluster state information exposed by the Amazon ECS APIs to make appropriate placement decisions. Amazon ECS offers two scheduler options: the service scheduler and Run Task.
The service scheduler is suited for long-running stateless applications. It ensures that the required number of tasks is always running (the replica strategy) and automatically reschedules tasks if they fail. Services also let you deploy updates, such as changing the number of running tasks or the task definition revision that should run. The daemon scheduling strategy instead deploys exactly one task on each active container instance.
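The two service strategies can be sketched as request parameters for the ECS CreateService API (for example, boto3's `create_service`). The cluster, service, and task definition names below are illustrative, not taken from the text:

```python
# Sketch (not sent to AWS): parameters for ecs.create_service(),
# assuming a hypothetical cluster "demo-cluster" and task definitions "web:1" / "agent:1".

# REPLICA: the service scheduler keeps desiredCount tasks running,
# rescheduling tasks whenever one fails.
replica_service = {
    "cluster": "demo-cluster",
    "serviceName": "web-service",
    "taskDefinition": "web:1",
    "desiredCount": 3,
    "schedulingStrategy": "REPLICA",
}

# DAEMON: exactly one task per active container instance,
# so no desiredCount is specified.
daemon_service = {
    "cluster": "demo-cluster",
    "serviceName": "log-agent",
    "taskDefinition": "agent:1",
    "schedulingStrategy": "DAEMON",
}
```

With boto3 these would be submitted as `boto3.client("ecs").create_service(**replica_service)`; updating `desiredCount` or the task definition revision later is done through the UpdateService API.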
The Run Task option is suited for batch jobs, scheduled jobs, or a single job that performs work and stops. You can let the default task placement strategy distribute tasks randomly across your cluster, which minimizes the chance that a single instance receives a disproportionate number of tasks. Alternatively, you can customize how the scheduler places tasks using task placement strategies and constraints. This lets you optimize container placement to be as cost-efficient as possible by ensuring that your tasks run on the instance types best suited to your workload.
The binpack placement strategy, for instance, tries to make placement as cost-efficient as possible: it places tasks on the instances that leave the least amount of unused CPU or memory. This in turn minimizes the number of compute instances in use, resulting in better resource efficiency. Placement strategies can be combined with placement constraints, which let you restrict where tasks are placed based on attributes such as instance type or Availability Zone.
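A binpack strategy paired with a placement constraint can be sketched as parameters for the ECS RunTask API. The cluster and task definition names and the `t3.large` instance type are illustrative assumptions:

```python
# Sketch (not sent to AWS): placement settings for ecs.run_task().
run_task_params = {
    "cluster": "demo-cluster",          # hypothetical cluster name
    "taskDefinition": "batch-job:1",    # hypothetical task definition
    "count": 5,
    "placementStrategy": [
        # binpack on memory: fill existing instances before spreading
        # to new ones, leaving the least unused memory per instance.
        {"type": "binpack", "field": "memory"},
    ],
    "placementConstraints": [
        # memberOf constraint: only place tasks on a given instance type.
        {"type": "memberOf",
         "expression": "attribute:ecs.instance-type == t3.large"},
    ],
}
```

The same `placementStrategy`/`placementConstraints` structure is also accepted when creating a service, so a replica service can binpack as well.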
Amazon EKS: With Amazon EKS, the Kubernetes scheduler (kube-scheduler) is responsible for finding the best node for every newly created or otherwise unscheduled pod that has no node assigned. It assigns the pod to the node with the highest ranking based on a filtering and scoring system. If more than one node has an equal score, kube-scheduler selects one of them at random. You can constrain a pod so that it can only run on a particular set of nodes. The scheduler automatically finds a reasonable placement, but there are circumstances where you might want to control which node a pod is deployed to: for example, to ensure that a pod ends up on a machine with SSD storage attached, or to co-locate pods from two services that communicate frequently in the same Availability Zone. Here are some important scheduling concepts in Kubernetes:
- NodeSelector is the simplest recommended form of node selection constraint. For a pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels.
- Topology spread constraints control how pods are spread across your cluster among failure domains such as Regions, zones, nodes, and other user-defined topology domains. This helps achieve high availability as well as efficient resource utilization.
- Node affinity is a property of pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite; they allow a node to repel a set of pods. Tolerations are applied to pods and allow (but do not require) the pods to schedule onto nodes with matching taints.
- Pod priority indicates the importance of a pod relative to other pods. If a pod can't be scheduled, the scheduler tries to preempt (evict) lower-priority pods to make scheduling of the pending pod possible.
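The concepts above can be combined in a single pod spec. The sketch below expresses a Kubernetes Pod manifest as a Python dict for brevity (the same structure you would write in YAML); the label keys, taint key, and PriorityClass name are illustrative assumptions:

```python
# Sketch of a Kubernetes Pod manifest combining the scheduling concepts above.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        # nodeSelector: the node must carry this label to be eligible.
        "nodeSelector": {"disktype": "ssd"},
        # topology spread constraint: spread "app: web" pods evenly
        # across zones, allowing at most a skew of 1.
        "topologySpreadConstraints": [{
            "maxSkew": 1,
            "topologyKey": "topology.kubernetes.io/zone",
            "whenUnsatisfiable": "ScheduleAnyway",
            "labelSelector": {"matchLabels": {"app": "web"}},
        }],
        # toleration: permit (but not require) scheduling onto nodes
        # tainted with gpu=true:NoSchedule.
        "tolerations": [{
            "key": "gpu", "operator": "Equal",
            "value": "true", "effect": "NoSchedule",
        }],
        # priority: references a PriorityClass created separately;
        # higher-priority pending pods may preempt lower-priority ones.
        "priorityClassName": "high-priority",
        "containers": [{"name": "web", "image": "nginx"}],
    },
}
```

Node affinity would be expressed under `spec.affinity.nodeAffinity` using `requiredDuringSchedulingIgnoredDuringExecution` (hard requirement) or `preferredDuringSchedulingIgnoredDuringExecution` (preference).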
With Fargate, App Runner, and Lambda, you don't need to manage how to schedule your containers. Compute is provisioned for you automatically as required, based on the resource requirements you have configured. The containers are automatically scheduled on provisioned compute.
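For Fargate, this means a RunTask request carries no placement strategies or constraints at all; you supply only the task definition (which declares the CPU and memory requirements) and networking. The cluster, task definition, and subnet values below are placeholders:

```python
# Sketch (not sent to AWS): ecs.run_task() parameters for Fargate.
# AWS provisions compute to match the cpu/memory declared in the
# task definition; there is no instance to pick or binpack onto.
fargate_task = {
    "cluster": "demo-cluster",       # hypothetical cluster name
    "taskDefinition": "web:1",       # task definition declares e.g. cpu=256, memory=512
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}
```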