Amazon Elastic Container Service
Developer Guide (API Version 2014-11-13)

Step Scaling Policies

With step scaling policies, you specify CloudWatch alarms to trigger the scaling process. For example, if you want to scale out when CPU utilization reaches a certain level, create an alarm using the CPUUtilization metric provided by Amazon ECS.

Amazon ECS publishes CloudWatch metrics with your service’s average CPU and memory usage. You can use these service utilization metrics to scale your service up to deal with high demand at peak times, and to scale your service down to reduce costs during periods of low utilization. For more information, see Service Utilization.

For services containing tasks that use the EC2 launch type, you can scale your container instances using CloudWatch alarms. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms. You can also use CloudWatch metrics published by other services, or custom metrics that are specific to your application. For example, a web service could increase the number of tasks based on Elastic Load Balancing metrics such as SurgeQueueLength, and a batch job could increase the number of tasks based on Amazon SQS metrics such as ApproximateNumberOfMessagesVisible.
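
As an illustration of the batch-job case above, the following is a minimal sketch of a CloudWatch alarm on the Amazon SQS ApproximateNumberOfMessagesVisible metric, created with the AWS CLI. The queue name (my-batch-queue), the alarm name, and the threshold are placeholder values, and the alarm action must reference the ARN of a scaling policy that you have already created with Application Auto Scaling (described later in this topic).

    # Alarm on SQS queue depth; queue name, threshold, and policy ARN are example values.
    aws cloudwatch put-metric-alarm \
        --alarm-name batch-queue-depth-high \
        --namespace AWS/SQS \
        --metric-name ApproximateNumberOfMessagesVisible \
        --dimensions Name=QueueName,Value=my-batch-queue \
        --statistic Average \
        --period 60 \
        --evaluation-periods 2 \
        --threshold 100 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions <scaling-policy-ARN>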

Step Scaling Concepts

  • The Amazon ECS service scheduler respects the desired count at all times. However, as long as a service has active scaling policies and alarms, Service Auto Scaling can change a desired count that you set manually.

  • If a service's desired count is set below its minimum capacity value, and an alarm triggers a scale-out activity, Application Auto Scaling scales the desired count up to the minimum capacity value and then continues to scale out as required, based on the scaling policy associated with the alarm. However, a scale-in activity does not adjust the desired count, because it is already below the minimum capacity value.

  • If a service's desired count is set above its maximum capacity value, and an alarm triggers a scale-in activity, Application Auto Scaling scales the desired count down to the maximum capacity value and then continues to scale in as required, based on the scaling policy associated with the alarm. However, a scale-out activity does not adjust the desired count, because it is already above the maximum capacity value.

  • During scaling activities, Service Auto Scaling uses the actual running task count in a service as its starting point, rather than the desired count, which represents the intended processing capacity. This prevents excessive (runaway) scaling that could not be satisfied, for example, if there are not enough container instance resources to place the additional tasks. If container instance capacity becomes available later, the pending scaling activity may succeed, and further scaling activities can continue after the cooldown period.

Amazon ECS Console Experience

The Amazon ECS console's service creation and service update workflows support step scaling policies. The console handles the ecsAutoscaleRole and policy creation, provided that the IAM user using the console has the permissions described in Service Auto Scaling Required IAM Permissions and can create IAM roles and attach policies to them.

When you configure a service to use Service Auto Scaling in the console, your service is automatically registered as a scalable target with Application Auto Scaling so that you can configure scaling policies that scale your service up and down. You can also create and update the scaling policies and CloudWatch alarms that trigger them in the Amazon ECS console.

To create a new ECS service that uses Service Auto Scaling, see Creating a Service.

To update an existing service to use Service Auto Scaling, see Updating a Service.

AWS CLI and SDK Experience

You can configure Service Auto Scaling by using the AWS CLI or the AWS SDKs, but you must observe the following considerations.

  • Service Auto Scaling is made possible by a combination of the Amazon ECS, CloudWatch, and Application Auto Scaling APIs. Services are created and updated with Amazon ECS, alarms are created with CloudWatch, and scaling policies are created with Application Auto Scaling. For more information about these specific API operations, see the Amazon Elastic Container Service API Reference, the Amazon CloudWatch API Reference, and the Application Auto Scaling API Reference. For more information about the AWS CLI commands for these services, see the ecs, cloudwatch, and application-autoscaling sections of the AWS CLI Command Reference.

  • Before your service can use Service Auto Scaling, you must register it as a scalable target with the Application Auto Scaling RegisterScalableTarget API operation (see the first example following this list).

  • After your ECS service is registered as a scalable target, you can create scaling policies with the Application Auto Scaling PutScalingPolicy API operation to specify what should happen when your CloudWatch alarms are triggered (see the second example following this list).

  • After you create the scaling policies for your service, you can create the CloudWatch alarms that trigger the scaling events for your service with the CloudWatch PutMetricAlarm API operation (see the third example following this list).
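
The following AWS CLI sketches illustrate the three steps above in order. They assume a service named sample-webapp in the default cluster and an account ID of 123456789012; those names, the capacity limits, thresholds, and ARNs are placeholders, not values from this guide. First, register the service as a scalable target with a minimum and maximum task count, passing the ecsAutoscaleRole used for Service Auto Scaling:

    # Register the ECS service as a scalable target (names, limits, and role ARN are example values).
    aws application-autoscaling register-scalable-target \
        --service-namespace ecs \
        --scalable-dimension ecs:service:DesiredCount \
        --resource-id service/default/sample-webapp \
        --min-capacity 1 \
        --max-capacity 10 \
        --role-arn arn:aws:iam::123456789012:role/ecsAutoscaleRole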
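
Next, a sketch of a step scaling policy that adds one task whenever the metric breaches the associated alarm's threshold. The policy name, cooldown, and step adjustments are example values; the command returns a policy ARN (PolicyARN) that you reference in the CloudWatch alarm:

    # Step scaling policy: add one task when the alarm threshold is breached (example configuration).
    aws application-autoscaling put-scaling-policy \
        --service-namespace ecs \
        --scalable-dimension ecs:service:DesiredCount \
        --resource-id service/default/sample-webapp \
        --policy-name sample-webapp-cpu-scale-out \
        --policy-type StepScaling \
        --step-scaling-policy-configuration '{
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 60,
            "MetricAggregationType": "Average",
            "StepAdjustments": [
                { "MetricIntervalLowerBound": 0, "ScalingAdjustment": 1 }
            ]
        }'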
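
Finally, a sketch of the CloudWatch alarm that triggers the policy, using the CPUUtilization metric that Amazon ECS publishes in the AWS/ECS namespace with ClusterName and ServiceName dimensions. Replace <PolicyARN> with the ARN returned by the put-scaling-policy command; the threshold and evaluation periods are example values:

    # Scale out when average service CPU utilization exceeds 75% for three consecutive periods.
    aws cloudwatch put-metric-alarm \
        --alarm-name sample-webapp-cpu-high \
        --namespace AWS/ECS \
        --metric-name CPUUtilization \
        --dimensions Name=ClusterName,Value=default Name=ServiceName,Value=sample-webapp \
        --statistic Average \
        --period 60 \
        --evaluation-periods 3 \
        --threshold 75 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions <PolicyARN>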