
Updating an Amazon ECS service using the console

You can update the task definition, desired task count, capacity provider strategy, platform version, and deployment configuration, or any combination of these. The current service configuration is pre-populated in the console.

For information about how to update the blue/green deployment configuration, see Updating an Amazon ECS blue/green deployment using the console.

Consider the following when you use the console:

  • If you want to temporarily stop your service, set Desired tasks to 0. Then, when you are ready to start the service, update the service with the original Desired tasks count.

  • You must use the AWS Command Line Interface to update a service that uses any of the following parameters:

    • Blue/green deployments

    • Service Discovery – You can only view your Service Discovery configuration.

    • Tracking policy with a custom metric

    • Update Service – You cannot update the awsvpc network configuration and the health check grace period.

    For information about how to update a service using the AWS CLI, see update-service in the AWS Command Line Interface Reference.

  • If you are changing the ports used by containers in a task definition, you might need to update the security groups for the container instances to work with the updated ports.

  • Amazon ECS does not automatically update the security groups associated with Elastic Load Balancing load balancers or Amazon ECS container instances.

  • If your service uses a load balancer, the load balancer configuration defined for your service when it was created cannot be changed using the console. You can instead use the AWS CLI or SDK to modify the load balancer configuration. For information about how to modify the configuration, see UpdateService in the Amazon Elastic Container Service API Reference.

  • If you update the task definition for the service, the container name and container port that are specified in the load balancer configuration must remain in the task definition.

You can update an existing service to change some of the service configuration parameters, such as the number of tasks that the service maintains or which task definition the tasks use. If your tasks use the Fargate launch type, you can also change the platform version that your service uses. A service using a Linux platform version cannot be updated to use a Windows platform version, and vice versa. If your application needs more capacity, you can scale up your service. If you have unused capacity, you can scale down by reducing the number of desired tasks in your service and free up resources.

If you want to use an updated container image for your tasks, you can create a new task definition revision with that image and deploy it to your service by using the force new deployment option in the console.

The service scheduler uses the minimum healthy percent and maximum percent parameters (in the deployment configuration for the service) to determine the deployment strategy.

If a service is using the rolling update (ECS) deployment type, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). The parameter also applies while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Use this parameter to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50 percent, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state. Tasks for services that do use a load balancer are considered healthy if they are in the RUNNING state and they are reported as healthy by the load balancer. The default value for minimum healthy percent is 100 percent.

If a service is using the rolling update (ECS) deployment type, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the PENDING, RUNNING, or STOPPING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer). The parameter also applies while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Use this parameter to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200 percent, the scheduler may start four new tasks before stopping the four older tasks, provided that the cluster resources required to do this are available. The default value for the maximum percent is 200 percent.
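The rounding behavior of these two parameters can be sketched as a quick calculation (an illustrative Python sketch, not the ECS scheduler itself):

```python
import math

def deployment_task_bounds(desired_count, min_healthy_percent, max_percent):
    """Illustrative only: compute the lower and upper bounds on task
    count during a rolling update. Minimum healthy percent rounds up;
    maximum percent rounds down."""
    lower = math.ceil(desired_count * min_healthy_percent / 100)
    upper = math.floor(desired_count * max_percent / 100)
    return lower, upper

# With 4 desired tasks, 50% minimum healthy, and 200% maximum, the
# scheduler may drop to 2 running tasks and run up to 8 tasks in total.
print(deployment_task_bounds(4, 50, 200))  # (2, 8)
```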

When the service scheduler replaces a task during an update, the service first removes the task from the load balancer (if used) and waits for the connections to drain. Then, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM signal and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM signal gracefully and exits within 30 seconds from receiving it, no SIGKILL signal is sent. The service scheduler starts and stops tasks as defined by your minimum healthy percent and maximum percent settings.
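A container can take advantage of this stop sequence by trapping SIGTERM. The following is a minimal, hypothetical sketch of such a handler (application code you would write yourself, not part of ECS):

```python
import os
import signal

# Illustrative sketch: an app that traps SIGTERM so the equivalent of
# `docker stop` can stop it gracefully within the 30-second timeout,
# avoiding a SIGKILL.
shutting_down = False

def handle_sigterm(signum, frame):
    # Stop accepting new work; finish in-flight requests, then exit.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the stop sequence by signaling this process:
os.kill(os.getpid(), signal.SIGTERM)
print(shutting_down)  # True
```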

The service scheduler also replaces tasks determined to be unhealthy after a container health check or a load balancer target group health check fails. This replacement depends on the maximumPercent and desiredCount service definition parameters. If a task is marked unhealthy, the service scheduler will first start a replacement task. Then, the following happens.

  • If the replacement task has a health status of HEALTHY, the service scheduler stops the unhealthy task.

  • If the replacement task has a health status of UNHEALTHY, the scheduler will stop either the unhealthy replacement task or the existing unhealthy task to get the total task count to equal desiredCount.

If the maximumPercent parameter prevents the scheduler from starting a replacement task first, the scheduler stops unhealthy tasks one at a time, at random, to free up capacity, and then starts a replacement task. The start and stop process continues until all unhealthy tasks are replaced with healthy tasks. After all unhealthy tasks have been replaced and only healthy tasks are running, if the total task count exceeds desiredCount, healthy tasks are stopped at random until the total task count equals desiredCount. For more information about maximumPercent and desiredCount, see Service definition parameters.
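The decision between starting a replacement first and stopping an unhealthy task first can be sketched as follows (a simplified illustration of the behavior described above, not the actual scheduler logic):

```python
import math

def replacement_action(running_tasks, desired_count, maximum_percent):
    """Illustrative only: start the replacement first when maximumPercent
    leaves room for one more task; otherwise stop an unhealthy task first
    to free up capacity."""
    max_tasks = math.floor(desired_count * maximum_percent / 100)
    if running_tasks + 1 <= max_tasks:
        return "start replacement, then stop unhealthy task"
    return "stop an unhealthy task, then start replacement"

# With maximumPercent at 200%, a replacement can start right away;
# at 100%, an unhealthy task must be stopped first.
print(replacement_action(4, 4, 200))
print(replacement_action(4, 4, 100))
```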


When you update a service that uses Amazon ECS circuit breaker, Amazon ECS creates a service deployment and a service revision. These resources allow you to view detailed information about the service history. For more information, see View service history using Amazon ECS service deployments.

Procedure

  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. On the Clusters page, choose the cluster.

  3. On the cluster details page, in the Services section, select the check box next to the service, and then choose Update.

  4. To have your service start a new deployment, select Force new deployment.

  5. For Task definition, choose the task definition family and revision.

    Important

    The console validates that the selected task definition family and revision are compatible with the defined compute configuration. If you receive a warning, verify both your task definition compatibility and the compute configuration that you selected.

  6. For Desired tasks, enter the number of tasks that you want to run for the service.

  7. For Min running tasks, enter the lower limit on the number of tasks in the service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). For more information, see Deployment configuration.

  8. For Max running tasks, enter the upper limit on the number of tasks in the service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer).

  9. To configure how Amazon ECS detects and handles deployment failures, expand Deployment failure detection, and then choose your options.

    1. To stop a deployment when the tasks cannot start, select Use the Amazon ECS deployment circuit breaker.

      To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select Rollback on failures.

    2. To stop a deployment based on application metrics, select Use CloudWatch alarm(s). Then, from CloudWatch alarm name, choose the alarms. To create a new alarm, go to the CloudWatch console.

      To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select Rollback on failures.

  10. To change the compute options, expand Compute configuration, and then do the following:

    1. For services on AWS Fargate, for Platform version, choose the new version.

    2. For services that use a capacity provider strategy, for Capacity provider strategy, do the following:

      • To add an additional capacity provider, choose Add more. Then, for Capacity provider, choose the capacity provider.

      • To remove a capacity provider, to the right of the capacity provider, choose Remove.

      A service that's using an Auto Scaling group capacity provider can't be updated to use a Fargate capacity provider. A service that's using a Fargate capacity provider can't be updated to use an Auto Scaling group capacity provider.

  11. (Optional) To configure service Auto Scaling, expand Service auto scaling, and then specify the following parameters.

    1. To use service auto scaling, select Service auto scaling.

    2. For Minimum number of tasks, enter the lower limit of the number of tasks for service auto scaling to use. The desired count will not go below this count.

    3. For Maximum number of tasks, enter the upper limit of the number of tasks for service auto scaling to use. The desired count will not go above this count.

    4. Choose the policy type. Under Scaling policy type, choose one of the following options.

      To use this policy type Do this

      Target tracking

      1. For Scaling policy type, choose Target tracking.

      2. For Policy name, enter the name of the policy.

      3. For ECS service metric, select one of the following metrics.

        • ECSServiceAverageCPUUtilization – Average CPU utilization of the service.

        • ECSServiceAverageMemoryUtilization – Average memory utilization of the service.

        • ALBRequestCountPerTarget – Number of requests completed per target in an Application Load Balancer target group.

      4. For Target value, enter the value the service maintains for the selected metric.

      5. For Scale-out cooldown period, enter the amount of time, in seconds, after a scale-out activity (add tasks) that must pass before another scale-out activity can start.

      6. For Scale-in cooldown period, enter the amount of time, in seconds, after a scale-in activity (remove tasks) that must pass before another scale-in activity can start.

      7. (Optional) To prevent the policy from performing a scale-in activity, select Turn off scale-in. Use this if you want your scaling policy to scale out for increased traffic but don't need it to scale in when traffic decreases.

      Step scaling

      1. For Scaling policy type, choose Step scaling.

      2. For Policy name, enter the policy name.

      3. For Alarm name, enter a unique name for the alarm.

      4. For Amazon ECS service metric, choose the metric to use for the alarm.

      5. For Statistic, choose the alarm statistic.

      6. For Period, choose the period for the alarm.

      7. For Alarm condition, choose how to compare the selected metric to the defined threshold.

      8. For Threshold to compare metrics and Evaluation period to initiate alarm, enter the threshold used for the alarm and how long to evaluate the threshold.

      9. Under Scaling actions, do the following:

        • For Action, select whether to add, remove, or set a specific desired count for your service.

        • If you chose to add or remove tasks, for Value, enter the number of tasks (or percent of existing tasks) to add or remove when the scaling action is initiated. If you chose to set the desired count, enter the number of tasks. For Type, select whether the Value is an integer or a percent value of the existing desired count.

        • For Lower bound and Upper bound, enter the lower boundary and upper boundary of your step scaling adjustment. By default, the lower bound for an add policy is the alarm threshold and the upper bound is positive (+) infinity. By default, the upper bound for a remove policy is the alarm threshold and the lower bound is negative (-) infinity.

        • (Optional) Add additional scaling options. Choose Add new scaling action, and then repeat the Scaling actions steps.

        • For Cooldown period, enter the amount of time, in seconds, to wait for a previous scaling activity to take effect. For an add policy, this is the time after a scale-out activity during which the scaling policy blocks scale-in activities and limits how many tasks can be scaled out at a time. For a remove policy, this is the time after a scale-in activity that must pass before another scale-in activity can start.
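The two policy types above can be sketched with some illustrative arithmetic (assumptions about how Application Auto Scaling behaves in general, not its implementation): target tracking scales roughly in proportion to how far the metric is from the target, and step scaling picks the action whose bounds, relative to the alarm threshold, contain the metric breach.

```python
import math

def target_tracking_estimate(current_tasks, metric_value, target_value,
                             min_tasks, max_tasks):
    # Proportional estimate, clamped to the configured task range.
    estimate = math.ceil(current_tasks * metric_value / target_value)
    return max(min_tasks, min(max_tasks, estimate))

def select_step_adjustment(metric_value, threshold, steps):
    # Each step is (lower, upper, adjustment); bounds are relative to
    # the alarm threshold, and None stands in for +/- infinity.
    breach = metric_value - threshold
    for lower, upper, adjustment in steps:
        if (lower is None or breach >= lower) and (upper is None or breach < upper):
            return adjustment
    return 0

# 4 tasks at 90% average CPU against a 60% target suggests 6 tasks:
print(target_tracking_estimate(4, 90, 60, min_tasks=1, max_tasks=10))  # 6

# Add 1 task for a breach of 0-20 above the threshold, 3 tasks beyond:
steps = [(0, 20, 1), (20, None, 3)]
print(select_step_adjustment(metric_value=85, threshold=70, steps=steps))  # 1
```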

  12. (Optional) To use Service Connect, select Turn on Service Connect, and then specify the following:

    1. Under Service Connect configuration, specify the client mode.

      • If your service runs a network client application that only needs to connect to other services in the namespace, choose Client side only.

      • If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose Client and server.

    2. To use a namespace that is not the default cluster namespace, for Namespace, choose the service namespace.

  13. If your task uses a data volume that's compatible with configuration at deployment, you can configure the volume by expanding Volume.

    The volume name and volume type are configured when you create a task definition revision and can't be changed when you update a service. To update the volume name and type, you must create a new task definition revision and update the service by using the new revision.

    To configure this volume type Do this

    Amazon EBS

    1. For EBS volume type, choose the type of EBS volume that you want to attach to your task.

    2. For Size (GiB), enter a valid value for the volume size in gibibytes (GiB). You can specify a minimum of 1 GiB and a maximum of 16,384 GiB volume size. This value is required unless you provide a snapshot ID.

    3. For IOPS, enter the maximum number of input/output operations per second (IOPS) that the volume should provide. This value is configurable only for the io1, io2, and gp3 volume types.

    4. For Throughput (MiB/s), enter the throughput that the volume should provide, in mebibytes per second (MiBps, or MiB/s). This value is configurable only for the gp3 volume type.

    5. For Snapshot ID, choose an existing Amazon EBS volume snapshot or enter the ARN of a snapshot if you want to create a volume from a snapshot. You can also create a new, empty volume by not choosing or entering a snapshot ID.

    6. For File system type, choose the type of file system that will be used for data storage and retrieval on the volume. You can choose either the operating system default or a specific file system type. The default for Linux is XFS. For volumes created from a snapshot, you must specify the same file system type that the volume was using when the snapshot was created. If there is a file system type mismatch, the task will fail to start.

    7. For Infrastructure role, choose an IAM role with the necessary permissions that allow Amazon ECS to manage Amazon EBS volumes for tasks. You can attach the AmazonECSInfrastructureRolePolicyForVolumes managed policy to the role, or you can use the policy as a guide to create and attach your own policy with permissions that meet your specific needs. For more information about the necessary permissions, see Amazon ECS infrastructure IAM role.

    8. For Encryption, choose Default if you want to use the Amazon EBS encryption by default settings. If your account has Encryption by default configured, the volume will be encrypted with the AWS Key Management Service (AWS KMS) key that's specified in the setting. If you choose Default and Amazon EBS default encryption isn't turned on, the volume will be unencrypted.

      If you choose Custom, you can specify an AWS KMS key of your choice for volume encryption.

      If you choose None, the volume will be unencrypted unless you have encryption by default configured, or if you create a volume from an encrypted snapshot.

    9. If you've chosen Custom for Encryption, you must specify the AWS KMS key that you want to use. For KMS key, choose an AWS KMS key or enter a key ARN. If you choose to encrypt your volume by using a symmetric customer managed key, make sure that you have the right permissions defined in your AWS KMS key policy. For more information, see Data encryption for Amazon EBS volumes.

    10. (Optional) Under Tags, you can add tags to your Amazon EBS volume by either propagating tags from the task definition or service, or by providing your own tags.

      If you want to propagate tags from the task definition, choose Task definition for Propagate tags from. If you want to propagate tags from the service, choose Service for Propagate tags from. If you choose Do not propagate, or if you don't choose a value, the tags aren't propagated.

      If you want to provide your own tags, choose Add tag and then provide the key and value for each tag you add.

      For more information about tagging Amazon EBS volumes, see Tagging Amazon EBS volumes.

  14. (Optional) To help identify your service, expand the Tags section, and then configure your tags.

    • [Add a tag] Choose Add tag, and do the following:

      • For Key, enter the key name.

      • For Value, enter the key value.

    • [Remove a tag] Next to the tag, choose Remove tag.

  15. Choose Update.

Next steps

Track your deployment and view your service history for services that use the Amazon ECS deployment circuit breaker. For more information, see View service history using Amazon ECS service deployments.