Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the
desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the
UpdateService action.
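For illustration, a minimal sketch of creating a service with the AWS SDK for Python (Boto3) might look like the following; the cluster, service, and task definition names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Create a service that keeps two copies of the task running.
# "my-cluster", "my-service", and "my-task:1" are placeholder names.
response = ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",  # family:revision; omit the revision to use the latest
    desiredCount=2,
)
print(response["service"]["serviceArn"])
```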
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
Amazon Elastic Inference (EI) is no longer available to customers.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see
Service load balancing in the
Amazon Elastic Container Service Developer Guide.
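For example, to register the service's tasks with a load balancer target group, you can pass a loadBalancers entry when creating the service. The following is a sketch; the target group ARN, container name, and port are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder target group ARN, container name, and port for illustration only.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web-service",
    taskDefinition="web-task",
    desiredCount=2,
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
            "containerName": "web",  # container in the task definition that receives traffic
            "containerPort": 80,
        }
    ],
)
```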
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service.
volumeConfigurations is only supported for REPLICA services, not DAEMON services. For more information, see
Amazon EBS volumes in the
Amazon Elastic Container Service Developer Guide.
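The following sketch shows one way a volume configuration might be passed with the AWS SDK for Python. The volume name is assumed to match a volume declared in the task definition with configuredAtLaunch enabled, and the size, volume type, and infrastructure role ARN are placeholders; check the API reference for the full field set.

```python
import boto3

ecs = boto3.client("ecs")

# Sketch only: "data" must match a configuredAtLaunch volume in the task
# definition, and the infrastructure role ARN below is a placeholder.
ecs.create_service(
    cluster="my-cluster",
    serviceName="data-service",
    taskDefinition="data-task",
    desiredCount=1,
    volumeConfigurations=[
        {
            "name": "data",
            "managedEBSVolume": {
                "sizeInGiB": 20,
                "volumeType": "gp3",
                "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            },
        }
    ],
)
```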
Tasks for services that don't use a load balancer are considered healthy if they're in the
RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the
RUNNING state and are reported as healthy by the load balancer.
There are two service scheduler strategies available:
- REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
- DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
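As a sketch of the two strategies with the AWS SDK for Python (cluster, service, and task definition names are placeholders), note that a daemon service omits desiredCount while a replica service specifies it:

```python
import boto3

ecs = boto3.client("ecs")

# DAEMON: one task per active container instance, so no desiredCount is given.
ecs.create_service(
    cluster="my-cluster",
    serviceName="log-agent",
    taskDefinition="log-agent-task",
    schedulingStrategy="DAEMON",
)

# REPLICA (the default): the scheduler maintains the desired count across the cluster.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web-service",
    taskDefinition="web-task",
    desiredCount=3,
    schedulingStrategy="REPLICA",
)
```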
You can optionally specify a deployment configuration for your service. A deployment is initiated by changing properties of the service, such as its task definition or desired count, with the UpdateService action. The default value of minimumHealthyPercent is 100% for a replica service and 0% for a daemon service.
If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). This limit also applies when any of your container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have a desired count of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
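For example, the minimum healthy percent for the ECS deployment controller is set through the service's deployment configuration. The following is a sketch with placeholder names that mirrors the four-task example above.

```python
import boto3

ecs = boto3.client("ecs")

# With four desired tasks and a 50% minimum healthy percent, the scheduler may
# stop up to two existing tasks before starting replacements.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web-service",
    taskDefinition="web-task",
    desiredCount=4,
    deploymentConfiguration={"minimumHealthyPercent": 50},
)
```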
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer). This limit also applies when any of your container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired count of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
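As a sketch of the rounding rules described above, the lower limit rounds up and the upper limit rounds down; the counts here mirror the examples in the text.

```python
import math

desired_count = 4
minimum_healthy_percent = 50   # lower limit rounds up
maximum_percent = 200          # upper limit rounds down

lower_limit = math.ceil(desired_count * minimum_healthy_percent / 100)
upper_limit = math.floor(desired_count * maximum_percent / 100)

print(lower_limit)  # 2 tasks must stay RUNNING during the deployment
print(upper_limit)  # up to 8 tasks may be RUNNING or PENDING
```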
If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller type and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limits on the number of tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, even though they're currently visible when describing your service.
When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet action. For more information, see
Amazon ECS deployment types in the
Amazon Elastic Container Service Developer Guide.
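A sketch of this flow with the AWS SDK for Python, using placeholder names, might look like the following: the service is created with the EXTERNAL deployment controller, and tasks are then managed through a separately created task set.

```python
import boto3

ecs = boto3.client("ecs")

# Create the service with the EXTERNAL deployment controller; only
# service-level parameters (here the name and controller) are specified.
ecs.create_service(
    cluster="my-cluster",
    serviceName="external-service",
    deploymentController={"type": "EXTERNAL"},
)

# Tasks are managed through task sets rather than the service itself.
ecs.create_task_set(
    cluster="my-cluster",
    service="external-service",
    taskDefinition="web-task:1",
    launchType="EC2",
)
```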
When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see
Amazon ECS task placement in the
Amazon Elastic Container Service Developer Guide.
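For example, placement strategies and constraints can be passed when the service is created. The following is a sketch with placeholder names that spreads tasks across Availability Zones and then binpacks on memory, with a constraint limiting placement by instance type.

```python
import boto3

ecs = boto3.client("ecs")

# Spread tasks across Availability Zones, then binpack on memory within each zone.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web-service",
    taskDefinition="web-task",
    desiredCount=4,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"},
    ],
)
```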