REL07-BP01 Use automation when obtaining or scaling resources
When replacing impaired resources or scaling your workload, automate the process by using managed AWS services, such as Amazon S3 and AWS Auto Scaling. You can also use third-party tools and AWS SDKs to automate scaling.
Managed AWS services include Amazon S3, Amazon CloudFront, AWS Auto Scaling, AWS Lambda, Amazon DynamoDB, AWS Fargate, and Amazon Route 53.
AWS Auto Scaling lets you detect and replace impaired instances. It also lets you build scaling plans for resources, including Amazon EC2.
When scaling EC2 instances, ensure that you use multiple Availability Zones (preferably at least three) and add or remove capacity to maintain balance across these Availability Zones. ECS tasks or Kubernetes pods (when using Amazon Elastic Kubernetes Service) should also be distributed across multiple Availability Zones.
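The balancing rule above can be sketched as a small helper. This is an illustrative calculation, not an AWS API; the Availability Zone names and the function itself are assumptions for the example.

```python
# Sketch: spreading desired EC2 capacity evenly across Availability Zones,
# keeping per-AZ counts within one instance of each other.

def balance_capacity(desired: int, azs: list[str]) -> dict[str, int]:
    """Distribute `desired` instances across `azs` as evenly as possible."""
    base, extra = divmod(desired, len(azs))
    # The first `extra` AZs each receive one additional instance.
    return {az: base + (1 if i < extra else 0) for i, az in enumerate(azs)}

plan = balance_capacity(10, ["us-east-1a", "us-east-1b", "us-east-1c"])
print(plan)  # {'us-east-1a': 4, 'us-east-1b': 3, 'us-east-1c': 3}
```

Auto Scaling groups perform this balancing for you when you specify multiple subnets in different Availability Zones; the sketch only shows the invariant they maintain.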
When using AWS Lambda, functions scale automatically. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code, up to the allocated concurrency. You need to ensure that the necessary concurrency is configured on the specific Lambda function and in your Service Quotas.
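A back-of-the-envelope way to size that concurrency is Little's law: concurrency ≈ arrival rate × average duration. The headroom factor below is an assumption for the example, not an AWS recommendation.

```python
import math

def required_concurrency(requests_per_second: float,
                         avg_duration_seconds: float,
                         headroom: float = 1.2) -> int:
    """Estimate Lambda concurrency via Little's law, with a safety margin."""
    return math.ceil(requests_per_second * avg_duration_seconds * headroom)

# 500 req/s at 200 ms average duration, with 20% headroom:
print(required_concurrency(500, 0.2))  # 120
```

Compare the result against your function's reserved concurrency and the account-level concurrent-executions quota in Service Quotas.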
Amazon S3 automatically scales to handle high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing requests across prefixes. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
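The arithmetic behind that example is simply the documented per-prefix rate multiplied by the prefix count:

```python
# Documented per-prefix request rates for Amazon S3.
GET_RPS_PER_PREFIX = 5_500   # GET/HEAD requests per second per prefix
PUT_RPS_PER_PREFIX = 3_500   # PUT/COPY/POST/DELETE requests per second per prefix

def aggregate_read_rps(prefix_count: int) -> int:
    """Aggregate GET/HEAD throughput when reads are spread across prefixes."""
    return prefix_count * GET_RPS_PER_PREFIX

print(aggregate_read_rps(10))  # 55000
```

The scaling is linear only if the client actually spreads its requests across the prefixes rather than concentrating them on one.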
Configure and use Amazon CloudFront or a trusted content delivery network (CDN). A CDN can provide faster end-user response times and can serve requests for content from cache, therefore reducing the need to scale your workload.
Common anti-patterns:

- Implementing Auto Scaling groups for automated healing, but not implementing elasticity.
- Using automatic scaling to respond to large increases in traffic.
- Deploying highly stateful applications, eliminating the option of elasticity.
Benefits of establishing this best practice: Automation removes the potential for manual error in deploying and decommissioning resources. It also reduces the risk of cost overruns and denial of service caused by slow responses to deployment or decommissioning needs.
Level of risk exposed if this best practice is not established: High
Implementation guidance
Configure and use AWS Auto Scaling. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. With AWS Auto Scaling, you can set up application scaling for multiple resources across multiple services.
- Configure Auto Scaling on your Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, Amazon Aurora Replicas, and AWS Marketplace appliances as applicable.
  - Managing throughput capacity automatically with DynamoDB Auto Scaling
- Use service API operations to specify the alarms, scaling policies, warm-up times, and cool-down times.
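As one concrete shape of such a configuration, the following sketch builds the request parameters for a target-tracking scaling policy on a DynamoDB table via Application Auto Scaling. The policy name, table name, and numeric values are placeholders; the dict is constructed locally and the API call itself is only shown in a comment.

```python
# Sketch of put_scaling_policy parameters for Application Auto Scaling
# (DynamoDB read capacity, target-tracking). Values are illustrative.
policy = {
    "PolicyName": "orders-table-read-target-tracking",  # placeholder name
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/orders",                       # placeholder table
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # aim for ~70% consumed read capacity
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
        "ScaleOutCooldown": 60,   # seconds before another scale-out
        "ScaleInCooldown": 300,   # seconds before another scale-in
    },
}

# With credentials configured, this could be submitted as:
# boto3.client("application-autoscaling").put_scaling_policy(**policy)
```

A register-scalable-target call for the same `ResourceId` and `ScalableDimension` must exist before the policy takes effect.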
Use Elastic Load Balancing. Load balancers can distribute load by path or by network connectivity.

- What is Elastic Load Balancing?
- Application Load Balancers can distribute load by path.
  - What is an Application Load Balancer?
  - Configure an Application Load Balancer to distribute traffic to different workloads based on the path under the domain name.
  - Application Load Balancers can be used to distribute loads in a manner that integrates with AWS Auto Scaling to manage demand.
- Network Load Balancers can distribute load by connection.
  - What is a Network Load Balancer?
  - Configure a Network Load Balancer to distribute traffic to different workloads using TCP, or to have a constant set of IP addresses for your workload.
  - Network Load Balancers can be used to distribute loads in a manner that integrates with AWS Auto Scaling to manage demand.
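The path-based routing that an Application Load Balancer performs can be illustrated with a tiny routing function. The path prefixes and target group names are invented for the example; on a real ALB these rules live on the listener, not in application code.

```python
# Illustrative ALB-style path routing: first matching prefix wins,
# otherwise traffic goes to the default target group.
RULES = [
    ("/api/", "api-target-group"),
    ("/static/", "static-target-group"),
]
DEFAULT_TARGET_GROUP = "web-target-group"

def route(path: str) -> str:
    """Return the target group a request path would be forwarded to."""
    for prefix, target_group in RULES:
        if path.startswith(prefix):
            return target_group
    return DEFAULT_TARGET_GROUP

print(route("/api/orders"))  # api-target-group
print(route("/index.html"))  # web-target-group
```

Because each target group can be the target of its own Auto Scaling group, splitting traffic by path lets each workload scale independently.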
Use a highly available DNS provider. DNS names allow your users to enter names instead of IP addresses to access your workloads, and distribute this information to a defined scope, usually globally for users of the workload.

- Use Amazon Route 53 or a trusted DNS provider.
- Use Route 53 to manage your CloudFront distributions and load balancers.
  - Determine the domains and subdomains you are going to manage.
  - Create appropriate record sets using ALIAS or CNAME records.
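An ALIAS record set for pointing a zone apex at a load balancer has the following general shape. This is a hedged sketch of a Route 53 `ChangeResourceRecordSets` change batch: the domain, load balancer DNS name, and hosted zone ID are placeholders, and the dict is built locally rather than submitted.

```python
# Sketch of a Route 53 change batch that UPSERTs an A-record ALIAS
# from a zone apex to an Application Load Balancer. Values are placeholders.
change_batch = {
    "Comment": "Point apex at the Application Load Balancer",
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",  # placeholder domain
            "Type": "A",
            "AliasTarget": {
                # The load balancer's region-specific hosted zone ID
                # (placeholder here), not your own zone's ID.
                "HostedZoneId": "Z35SXDOTRQ7X7K",
                "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": True,
            },
        },
    }],
}
```

ALIAS records work at the zone apex where CNAMEs are not allowed, which is why they are preferred for apex-to-load-balancer mappings.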
Use the AWS global network to optimize the path from your users to your applications. AWS Global Accelerator continually monitors the health of your application endpoints and redirects traffic to healthy endpoints in less than 30 seconds.

- AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single AWS Region or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances.
Configure and use Amazon CloudFront or a trusted content delivery network (CDN). A content delivery network can provide faster end-user response times and can serve requests for content from cache, reducing requests that would otherwise cause unnecessary scaling of your workloads.

- Configure Amazon CloudFront distributions for your workloads, or use a third-party CDN.
  - You can limit access to your workloads so that they are only accessible from CloudFront by using the IP ranges for CloudFront in your endpoint security groups or access policies.
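AWS publishes its IP ranges as a JSON document at https://ip-ranges.amazonaws.com/ip-ranges.json; the CloudFront entries are the ones to allow in your origin's security groups or access policies. The snippet below filters a tiny inline sample that mimics that file's structure (the sample prefixes are illustrative, not authoritative).

```python
import json

# Inline excerpt mimicking the structure of AWS's published ip-ranges.json.
# In practice, fetch the real file from ip-ranges.amazonaws.com.
sample = json.loads("""
{
  "prefixes": [
    {"ip_prefix": "120.52.22.96/27", "service": "CLOUDFRONT", "region": "GLOBAL"},
    {"ip_prefix": "3.5.140.0/22",    "service": "S3",         "region": "ap-northeast-2"}
  ]
}
""")

def cloudfront_cidrs(ranges: dict) -> list[str]:
    """CIDR blocks to allow so that only CloudFront can reach the origin."""
    return [p["ip_prefix"] for p in ranges["prefixes"]
            if p["service"] == "CLOUDFRONT"]

print(cloudfront_cidrs(sample))  # ['120.52.22.96/27']
```

Because the published ranges change over time, this filtering is typically automated (for example, triggered by the AmazonIpSpaceChanged SNS notification) rather than done once by hand.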
Resources
Related documents: