This whitepaper is for historical reference only. Some content might be outdated and some links might not be available.
Container services on AWS
AWS is an elastic, secure, flexible, and developer-centric cloud provider, which makes it ideal for container workloads. AWS offers scalable infrastructure, APIs, and SDKs that integrate into the development lifecycle and accentuate the benefits that containers offer. In this section, we will discuss the different options for container deployments using AWS services:
- AWS App Runner is a fully managed service that makes it easy to quickly deploy containerized web applications and APIs. You can use an existing container image, container registry, source code repository, or existing CI/CD workflow to run a fully containerized web application. App Runner supports full stack development, with both front-end and back-end web applications that use HTTP and HTTPS protocols. App Runner automatically builds and deploys the web application and load balances traffic with encryption. It monitors the number of concurrent requests sent to your application and automatically adds additional instances based on request volume. When your application receives no incoming requests, App Runner scales the containers down to a CPU-throttled instance, which can serve incoming requests within milliseconds. App Runner is ideal when you want to run and scale your application on AWS without configuring or managing infrastructure services. You do not have to configure any orchestrators, set up build pipelines, manage load balancers, or rotate TLS certificates. When you associate a source code repository with App Runner, it can automatically containerize your web application and run it. This makes it the simplest way to build and run your containerized web application on AWS.
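As a sketch of how little configuration App Runner needs, the request below describes a service from an existing container image. The service name, ECR image URI, and port are hypothetical placeholders; the dict mirrors the JSON shape a `CreateService` call would take (for example through boto3's `apprunner` client).

```python
# Illustrative App Runner CreateService request for a container image source.
# All names and the image URI are hypothetical placeholders.
service_request = {
    "ServiceName": "my-web-app",
    "SourceConfiguration": {
        "ImageRepository": {
            "ImageIdentifier": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
            "ImageRepositoryType": "ECR",
            "ImageConfiguration": {"Port": "8080"},  # port the container listens on
        },
        "AutoDeploymentsEnabled": True,  # redeploy automatically on new image pushes
    },
    "InstanceConfiguration": {
        "Cpu": "1024",     # 1 vCPU
        "Memory": "2048",  # 2 GB
    },
}
```

Note that there is no cluster, load balancer, or scaling policy in the request; App Runner supplies those behind the scenes.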
- Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that provides a convenient way to rapidly launch thousands of containers across a broad range of AWS compute options. You can use your preferred CI/CD and automation tools with Amazon ECS. With Amazon ECS, there is no control plane, add-ons, or nodes to manage. Amazon ECS offers two launch types: Amazon EC2 and AWS Fargate (discussed later). With the Amazon EC2 launch type, Amazon ECS provides an easy lift for your applications that run on VMs. Your Amazon ECS clusters have container instances, which are Amazon EC2 instances running an Amazon ECS container agent. The agent communicates instance and container state information to the cluster manager. The Amazon ECS container agent is included in the Amazon ECS-optimized AMI, but you can also install it on any Amazon EC2 instance that supports the Amazon ECS specification. Your containers are defined in an Amazon ECS task definition that you use to run individual tasks or tasks within an Amazon ECS service. An Amazon ECS service enables you to run and maintain a specified number of tasks simultaneously in a cluster. The task definition can be thought of as a blueprint for your application, in which you specify parameters such as the container image to use, which ports to open, the amount of CPU and memory to use for each task or container within a task, and the IAM role the task should use. Amazon ECS also supports hybrid deployment scenarios. You can manage your containers on-premises using Amazon ECS Anywhere. Additionally, you have options to deploy containers on Outposts, Local Zones, and Wavelength with Amazon ECS. Because this whitepaper focuses on container deployments in the cloud, the details of these hybrid options are beyond its scope. Refer to the links in this section for more information on this topic.
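The blueprint idea can be made concrete with a minimal task definition for the EC2 launch type. The sketch below uses hypothetical names, image, and role ARN; the dict mirrors the JSON you would pass to the `RegisterTaskDefinition` API (for example via boto3's `ecs.register_task_definition`).

```python
# Illustrative Amazon ECS task definition (EC2 launch type).
# Family name, image, and role ARN are hypothetical placeholders.
task_definition = {
    "family": "web-app",
    "networkMode": "bridge",
    "taskRoleArn": "arn:aws:iam::123456789012:role/web-app-task-role",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.0",
            "cpu": 256,         # CPU units (1024 units = 1 vCPU)
            "memory": 512,      # hard memory limit, in MiB
            "essential": True,  # stop the whole task if this container exits
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
```

An Amazon ECS service would then reference this task definition by family and keep the desired number of tasks running in the cluster.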
- Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It provides a natural migration path if you already use Kubernetes and want to continue using it on AWS for your container applications. It provides highly available and secure clusters and automates key tasks such as security patching, node provisioning, and updates. Amazon EKS runs a single-tenant Kubernetes control plane for each cluster; the control plane infrastructure is not shared across clusters or AWS accounts. Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes, which are responsible for scheduling containers, managing the availability of applications, storing cluster data, and other key operations. You can also use the managed node group option in Amazon EKS to automate the provisioning and lifecycle management of the worker nodes that run your pods. Amazon EKS runs upstream Kubernetes, certified conformant for a predictable experience. You can easily migrate any standard Kubernetes application to Amazon EKS without refactoring your code. This allows you to deploy and manage workloads on your Amazon EKS cluster the same way that you would in any other Kubernetes environment. Currently supported versions are listed in the Amazon EKS user guide. To support operational capabilities on Kubernetes clusters, customers can leverage Amazon EKS add-ons, a curated set of software that simplifies the management of operational activities on clusters. For details on how to use add-ons, refer to EKS add-ons. Amazon EKS also supports hybrid deployment scenarios. You can manage your containers on-premises using Amazon EKS Anywhere. Amazon EKS Anywhere makes use of Amazon EKS Distro, the open source distribution of Kubernetes built and maintained by AWS; this is the same distribution used in Amazon EKS on AWS. Amazon EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises, along with automation tooling for cluster lifecycle support. Additionally, you have options to deploy Amazon EKS on Outposts, Local Zones, and AWS Wavelength. These hybrid options are beyond the scope of this whitepaper. Refer to the links in this section for more information on this topic.
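Because Amazon EKS runs upstream, conformant Kubernetes, a standard manifest works unchanged. The sketch below builds an `apps/v1` Deployment as a Python dict (names and image are hypothetical); the same content could be written as YAML and applied with `kubectl apply`, exactly as in any other Kubernetes environment.

```python
# Illustrative Kubernetes Deployment manifest, expressed as a Python dict.
# The name, labels, and image are hypothetical placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-app"},
    "spec": {
        "replicas": 3,  # desired number of pods
        "selector": {"matchLabels": {"app": "web-app"}},
        "template": {
            "metadata": {"labels": {"app": "web-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.0",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}
```

Nothing in the manifest is EKS-specific, which is what makes migration to and from other Kubernetes environments straightforward.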
- AWS Fargate provides a fully managed compute option to run containers for both Amazon ECS and Amazon EKS. Fargate reduces the time spent on configuration, patching, and security. Fargate runs each task or pod in its own kernel, giving tasks and pods their own isolated compute environment. This gives your application workload isolation and improved security by design. With Fargate, there is no over-provisioning or paying for additional servers. It allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. With the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. For Amazon EKS, Fargate integrates with Kubernetes using controllers that are built by AWS using the extension model provided by Kubernetes. These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate compute.
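Compared with the EC2 launch type sketched earlier, a Fargate task definition declares CPU and memory at the task level and uses `awsvpc` networking, since each task gets its own elastic network interface. A sketch with hypothetical names and ARNs:

```python
# Illustrative Amazon ECS task definition for the Fargate launch type.
# Family name, image, and role ARN are hypothetical placeholders.
fargate_task = {
    "family": "web-app-fargate",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # each task receives its own elastic network interface
    "cpu": "256",             # task-level CPU units, expressed as a string
    "memory": "512",          # task-level memory in MiB, expressed as a string
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.0",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
```

There is no container instance or cluster capacity to size: given this definition, Fargate allocates matching compute for each task it runs.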
Other options: There are additional container deployment offerings available on AWS, which can be useful depending on the nature of your workloads. AWS provides extensive documentation and blog posts for each of these offerings.
- AWS Batch helps you run batch computing workloads on the AWS Cloud. You can define job definitions that specify the container image used to run your jobs, which run as containerized applications on AWS Fargate or Amazon EC2 resources in your compute environment.
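A Batch job definition pointing at a container image might look like the sketch below (name, image, and role ARN are hypothetical), mirroring the JSON accepted by the `RegisterJobDefinition` API:

```python
# Illustrative AWS Batch job definition running a container on Fargate.
# Name, image, command, and role ARN are hypothetical placeholders.
job_definition = {
    "jobDefinitionName": "nightly-etl",
    "type": "container",
    "platformCapabilities": ["FARGATE"],  # or ["EC2"] for EC2 compute environments
    "containerProperties": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:1.0",
        "command": ["python", "etl.py"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},  # MiB
        ],
        "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    },
}
```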
- AWS Elastic Beanstalk supports the deployment of web applications from containers. With containers, you can define your own runtime environment. You can also choose your own platform, programming language, and any application dependencies (such as package managers or tools), which typically aren't supported by other platforms.
- AWS Lambda functions can be packaged and deployed as container images of up to 10 GB in size. This allows you to easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data-intensive workloads. Just like functions packaged as ZIP archives, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services that you get with Lambda.
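Deploying such an image can be sketched as the following `CreateFunction` request shape (function name, image URI, and role are hypothetical); setting `PackageType` to `"Image"` tells Lambda the code is a container image rather than a ZIP archive:

```python
# Illustrative Lambda CreateFunction request for a container-image function.
# Function name, image URI, and role ARN are hypothetical placeholders.
function_request = {
    "FunctionName": "ml-inference",
    "PackageType": "Image",  # deploy from a container image instead of a ZIP archive
    "Code": {
        "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-inference:1.0"
    },
    "Role": "arn:aws:iam::123456789012:role/lambda-exec-role",
    "MemorySize": 4096,  # MB; sized for a large ML dependency set
    "Timeout": 60,       # seconds
}
```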
- Amazon Lightsail is a highly scalable compute and networking resource on which you can deploy, run, and manage containers. When you deploy your images to your Lightsail container service, the service automatically launches and runs your containers in the AWS infrastructure.
- Red Hat OpenShift Service on AWS (ROSA) can accelerate your application development process if you presently run containers in OpenShift, by letting you use familiar OpenShift APIs and tools for deployments on AWS. ROSA comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat.
Your choice of service is driven by your workload properties, the ease of getting started, and the amount of control and customization flexibility you want. Consider starting on the fully managed end of the spectrum (App Runner or Fargate) and working backward toward a more self-managed experience based on the demands of your workload. The self-managed experience can extend as far as running containers directly on virtual machines with Amazon EC2, without using any AWS managed services. With AWS, you have the flexibility to pick the container deployment that works best for your operational needs without compromising on the benefits of containers.