AWS Tools for Windows PowerShell
Command Reference


Synopsis

Calls the Amazon EC2 Container Service UpdateService API operation.

Syntax

Update-ECSService
-Cluster <String>
-Alarms_AlarmName <String[]>
-AwsvpcConfiguration_AssignPublicIp <AssignPublicIp>
-AvailabilityZoneRebalancing <AvailabilityZoneRebalancing>
-DeploymentConfiguration_BakeTimeInMinute <Int32>
-CapacityProviderStrategy <CapacityProviderStrategyItem[]>
-DesiredCount <Int32>
-Alarms_Enable <Boolean>
-DeploymentCircuitBreaker_Enable <Boolean>
-ServiceConnectConfiguration_Enabled <Boolean>
-EnableECSManagedTag <Boolean>
-EnableExecuteCommand <Boolean>
-ForceNewDeployment <Boolean>
-HealthCheckGracePeriodSecond <Int32>
-DeploymentConfiguration_LifecycleHook <DeploymentLifecycleHook[]>
-LoadBalancer <LoadBalancer[]>
-LogConfiguration_LogDriver <LogDriver>
-DeploymentConfiguration_MaximumPercent <Int32>
-DeploymentConfiguration_MinimumHealthyPercent <Int32>
-ServiceConnectConfiguration_Namespace <String>
-LogConfiguration_Option <Hashtable>
-PlacementConstraint <PlacementConstraint[]>
-PlacementStrategy <PlacementStrategy[]>
-PlatformVersion <String>
-PropagateTag <PropagateTags>
-Alarms_Rollback <Boolean>
-DeploymentCircuitBreaker_Rollback <Boolean>
-LogConfiguration_SecretOption <Secret[]>
-AwsvpcConfiguration_SecurityGroup <String[]>
-Service <String>
-ServiceRegistry <ServiceRegistry[]>
-ServiceConnectConfiguration_Service <ServiceConnectService[]>
-DeploymentConfiguration_Strategy <DeploymentStrategy>
-AwsvpcConfiguration_Subnet <String[]>
-TaskDefinition <String>
-DeploymentController_Type <DeploymentControllerType>
-VolumeConfiguration <ServiceVolumeConfiguration[]>
-VpcLatticeConfiguration <VpcLatticeConfiguration[]>
-Select <String>
-Force <SwitchParameter>
-ClientConfig <AmazonECSConfig>

Description

Modifies the parameters of a service. On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization occurs using the latest revision of a task definition.
For services using the rolling update (ECS) deployment controller, you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide. You can update your volume configurations and trigger a new deployment. volumeConfigurations is only supported for REPLICA services, not DAEMON services. If you leave volumeConfigurations null, it doesn't trigger a new deployment.
For services using the blue/green (CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags option can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
If you have updated the container image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy. If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest), you don't need to create a new revision of your task definition. You can instead update the service using the forceNewDeployment option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start. You can also update the deployment configuration of a service.
When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
  • If minimumHealthyPercent is below 100%, the scheduler can ignore desiredCount temporarily during a deployment. For example, if desiredCount is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
  • The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment. You can use it to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent. When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic.
  • Determine which of the container instances in your cluster can support your service's task definition. For example, they have the required CPU, memory, ports, and container instance attributes.
  • By default, the service scheduler attempts to balance tasks across Availability Zones in this manner even though you can choose a different placement strategy.
    • Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
    • Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
  • Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
  • Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
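For illustration, the deployment batch behavior described above can be tuned directly from this cmdlet. The following sketch assumes an existing cluster named my-cluster and service named my-http-service (both placeholder names); it lets the scheduler stop up to half of the running tasks before starting replacements, while allowing up to 200% of the desired count during the deployment.

# Allow in-place replacement of up to half the tasks, with up to 200% of desiredCount running during the deployment
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -DeploymentConfiguration_MinimumHealthyPercent 50 `
    -DeploymentConfiguration_MaximumPercent 200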

Parameters

-Alarms_AlarmName <String[]>
One or more CloudWatch alarm names. Use a "," to separate the alarms. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_Alarms_AlarmNames
-Alarms_Enable <Boolean>
Determines whether to use the CloudWatch alarm option in the service deployment process.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_Alarms_Enable
-Alarms_Rollback <Boolean>
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_Alarms_Rollback
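As a sketch of how the three Alarms_* parameters combine, the following enables alarm monitoring for a deployment and rolls back if the alarm fires. The cluster, service, and alarm names are placeholders, and the CloudWatch alarm must already exist.

# Monitor a CloudWatch alarm during the deployment and roll back if it goes into alarm
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -Alarms_AlarmName @('my-service-high-cpu') `
    -Alarms_Enable $true `
    -Alarms_Rollback $true `
    -ForceNewDeployment $true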
-AvailabilityZoneRebalancing <AvailabilityZoneRebalancing>
Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-AwsvpcConfiguration_AssignPublicIp <AssignPublicIp>
Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value:
  • When you use create-service or update-service, the default is DISABLED.
  • When the service deploymentController is ECS, the value must be DISABLED.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesNetworkConfiguration_AwsvpcConfiguration_AssignPublicIp
-AwsvpcConfiguration_SecurityGroup <String[]>
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. All specified security groups must be from the same VPC. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesNetworkConfiguration_AwsvpcConfiguration_SecurityGroups
-AwsvpcConfiguration_Subnet <String[]>
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. All specified subnets must be from the same VPC. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesNetworkConfiguration_AwsvpcConfiguration_Subnets
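A minimal sketch of updating the awsvpc network configuration; the subnet and security group IDs are placeholders and must belong to the same VPC as the service.

# Replace the service's subnets and security groups and keep tasks private
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -AwsvpcConfiguration_Subnet @('subnet-0123456789abcdef0', 'subnet-0abcdef1234567890') `
    -AwsvpcConfiguration_SecurityGroup @('sg-0123456789abcdef0') `
    -AwsvpcConfiguration_AssignPublicIp DISABLED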
-CapacityProviderStrategy <CapacityProviderStrategyItem[]>
The details of a capacity provider strategy. You can set a capacity provider when you create a cluster, run a task, or update a service. When you use Fargate, the capacity providers are FARGATE or FARGATE_SPOT. When you use Amazon EC2, the capacity providers are Auto Scaling groups. You can change capacity providers for rolling deployments and blue/green deployments. The following list provides the valid transitions:
  • Update the Fargate launch type to an Auto Scaling group capacity provider.
  • Update the Amazon EC2 launch type to a Fargate capacity provider.
  • Update the Fargate capacity provider to an Auto Scaling group capacity provider.
  • Update the Amazon EC2 capacity provider to a Fargate capacity provider.
  • Update the Auto Scaling group or Fargate capacity provider back to the launch type. Pass an empty list in the capacityProviderStrategy parameter.
For information about Amazon Web Services CDK considerations, see Amazon Web Services CDK considerations. This parameter doesn't trigger a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
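The strategy items are Amazon.ECS.Model.CapacityProviderStrategyItem objects. The following sketch, with illustrative provider names and weights, keeps one task on FARGATE as a base and weights the remainder toward FARGATE_SPOT.

# Build two strategy items and apply them to the service
$onDemand = New-Object Amazon.ECS.Model.CapacityProviderStrategyItem
$onDemand.CapacityProvider = 'FARGATE'
$onDemand.Base = 1
$onDemand.Weight = 1
$spot = New-Object Amazon.ECS.Model.CapacityProviderStrategyItem
$spot.CapacityProvider = 'FARGATE_SPOT'
$spot.Weight = 4
Update-ECSService -Cluster my-cluster -Service my-http-service -CapacityProviderStrategy @($onDemand, $spot)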
-ClientConfig <AmazonECSConfig>
Amazon.PowerShell.Cmdlets.ECS.AmazonECSClientCmdlet.ClientConfig
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Cluster <String>
The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed. You can't change the cluster name.
Required?False
Position?1
Accept pipeline input?True (ByValue, ByPropertyName)
-DeploymentCircuitBreaker_Enable <Boolean>
Determines whether to use the deployment circuit breaker logic for the service.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_DeploymentCircuitBreaker_Enable
-DeploymentCircuitBreaker_Rollback <Boolean>
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_DeploymentCircuitBreaker_Rollback
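A minimal sketch that turns on the circuit breaker with automatic rollback for a rolling-update service (cluster and service names are placeholders).

# Stop a failing deployment early and roll back to the last successful deployment
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -DeploymentCircuitBreaker_Enable $true `
    -DeploymentCircuitBreaker_Rollback $true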
-DeploymentConfiguration_BakeTimeInMinute <Int32>
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the BLUE_GREEN deployment strategy.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_BakeTimeInMinutes
-DeploymentConfiguration_LifecycleHook <DeploymentLifecycleHook[]>
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesDeploymentConfiguration_LifecycleHooks
-DeploymentConfiguration_MaximumPercent <Int32>
If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DeploymentConfiguration_MinimumHealthyPercent <Int32>
If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the minimumHealthyPercent as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services . For services that do not use a load balancer, the following should be noted:
  • A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
  • If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
  • If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
  • If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
  • If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DeploymentConfiguration_Strategy <DeploymentStrategy>
The deployment strategy for the service. Choose from these valid values:
  • ROLLING - When you create a service which uses the rolling update (ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
  • BLUE_GREEN - A blue/green deployment strategy (BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
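A minimal sketch of switching to the blue/green strategy; it assumes the service already meets the blue/green prerequisites (for example, an Application Load Balancer or Service Connect), and the 10-minute bake time is illustrative.

# Use the blue/green strategy and keep both revisions running for 10 minutes after traffic shifts
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -DeploymentConfiguration_Strategy BLUE_GREEN `
    -DeploymentConfiguration_BakeTimeInMinute 10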
-DeploymentController_Type <DeploymentControllerType>
The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
  • ECS - When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
    • ROLLING: When you create a service which uses the rolling update (ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
      • Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
      • Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
      • Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
      • No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
      • Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
      • No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
      • Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
      • Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
      Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
    • BLUE_GREEN: A blue/green deployment strategy (BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
      • Service validation: When you need to validate new service revisions before directing production traffic to them
      • Zero downtime: When your service requires zero-downtime deployments
      • Instant roll back: When you need the ability to quickly roll back if issues are detected
      • Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
  • External - Use a third-party deployment controller.
  • Blue/green deployment (powered by CodeDeploy) - CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DesiredCount <Int32>
The number of instantiations of the task to place and keep running in your service. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-EnableECSManagedTag <Boolean>
Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide. Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesEnableECSManagedTags
-EnableExecuteCommand <Boolean>
If true, this enables execute command functionality on all task containers. If you do not want to override the value that was set when the service was created, you can set this to null when performing this action. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
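A minimal sketch (placeholder cluster and service names) that turns on ECS Exec for the service's tasks:

Update-ECSService -Cluster my-cluster -Service my-http-service -EnableExecuteCommand $true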
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ForceNewDeployment <Boolean>
Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
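For example, to redeploy the service so that new tasks pull the current image/tag combination without any other changes (placeholder names):

Update-ECSService -Cluster my-cluster -Service my-http-service -ForceNewDeployment $true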
-HealthCheckGracePeriodSecond <Int32>
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of 0 is used. If you don't use any of the health checks, then healthCheckGracePeriodSeconds is unused. If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesHealthCheckGracePeriodSeconds
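A minimal sketch (placeholder names) that gives newly started tasks five minutes before failing health checks can cause them to be stopped:

Update-ECSService -Cluster my-cluster -Service my-http-service -HealthCheckGracePeriodSecond 300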
-LoadBalancer <LoadBalancer[]>
You must have a service-linked role when you update this property. A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition. When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running. For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group. For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using CreateDeployment through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. You can remove existing loadBalancers by passing an empty list. This parameter triggers a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesLoadBalancers
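Load balancer entries are Amazon.ECS.Model.LoadBalancer objects. The following sketch attaches a single target group; the target group ARN, container name, and container port are placeholders and must match your task definition.

# Attach one target group to the service
$lb = New-Object Amazon.ECS.Model.LoadBalancer
$lb.TargetGroupArn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef'
$lb.ContainerName = 'web'
$lb.ContainerPort = 80
Update-ECSService -Cluster my-cluster -Service my-http-service -LoadBalancer @($lb)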
-LogConfiguration_LogDriver <LogDriver>
The log driver to use for the container. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesServiceConnectConfiguration_LogConfiguration_LogDriver
-LogConfiguration_Option <Hashtable>
The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No. Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
awslogs-region
Required: Yes. Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes. Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No. This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
awslogs-multiline-pattern
Required: No. This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.
mode
Required: No. Valid values: non-blocking | blocking. This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
  • Set the mode option in your container definition's logConfiguration as blocking.
  • Set the defaultLogDriverMode account setting to blocking.
max-buffer-size
Required: No. Default value: 10m. When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesServiceConnectConfiguration_LogConfiguration_Options
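As a sketch of how the Service Connect log configuration parameters combine, the following sends Service Connect proxy logs to CloudWatch Logs with the awslogs driver. The namespace, log group, Region, and stream prefix are placeholders, and the log group is assumed to exist already.

# Route Service Connect logs to an existing CloudWatch Logs group
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -ServiceConnectConfiguration_Enabled $true `
    -ServiceConnectConfiguration_Namespace my-namespace `
    -LogConfiguration_LogDriver awslogs `
    -LogConfiguration_Option @{ 'awslogs-group' = '/ecs/service-connect'; 'awslogs-region' = 'us-east-1'; 'awslogs-stream-prefix' = 'my-http-service' }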
-LogConfiguration_SecretOption <Secret[]>
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesServiceConnectConfiguration_LogConfiguration_SecretOptions
-PlacementConstraint <PlacementConstraint[]>
An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime. This parameter doesn't trigger a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesPlacementConstraints
-PlacementStrategy <PlacementStrategy[]>
The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object. You can specify a maximum of five strategy rules for each service. This parameter doesn't trigger a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
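Strategy rules are Amazon.ECS.Model.PlacementStrategy objects. A minimal sketch (placeholder names) that spreads tasks across Availability Zones and then bin-packs on memory:

# Spread across Availability Zones, then bin-pack by memory
$spread = New-Object Amazon.ECS.Model.PlacementStrategy
$spread.Type = 'spread'
$spread.Field = 'attribute:ecs.availability-zone'
$binpack = New-Object Amazon.ECS.Model.PlacementStrategy
$binpack.Type = 'binpack'
$binpack.Field = 'memory'
Update-ECSService -Cluster my-cluster -Service my-http-service -PlacementStrategy @($spread, $binpack)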
-PlatformVersion <String>
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide. This parameter triggers a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PropagateTag <PropagateTags>
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags. This parameter doesn't trigger a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesPropagateTags
-Select <String>
Use the -Select parameter to control the cmdlet output. The default value is 'Service'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.ECS.Model.UpdateServiceResponse). Specifying the name of a property of type Amazon.ECS.Model.UpdateServiceResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Service <String>
The name of the service to update.
Required?True
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServiceConnectConfiguration_Enabled <Boolean>
Specifies whether to use Service Connect with this service.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServiceConnectConfiguration_Namespace <String>
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServiceConnectConfiguration_Service <ServiceConnectService[]>
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesServiceConnectConfiguration_Services
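A minimal sketch of exposing one port through Service Connect, assuming the model types Amazon.ECS.Model.ServiceConnectService and Amazon.ECS.Model.ServiceConnectClientAlias and a port named http in the task definition (all names are placeholders):

# Publish the task definition's "http" port as http-api:80 within the namespace
$alias = New-Object Amazon.ECS.Model.ServiceConnectClientAlias
$alias.Port = 80
$alias.DnsName = 'http-api'
$sc = New-Object Amazon.ECS.Model.ServiceConnectService
$sc.PortName = 'http'
$sc.ClientAliases = @($alias)
Update-ECSService -Cluster my-cluster -Service my-http-service `
    -ServiceConnectConfiguration_Enabled $true `
    -ServiceConnectConfiguration_Namespace my-namespace `
    -ServiceConnectConfiguration_Service @($sc)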
-ServiceRegistry <ServiceRegistry[]>
You must have a service-linked role when you update this property. For more information about the role see the CreateService request parameter role. The details for the service discovery registries to assign to this service. For more information, see Service Discovery. When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running. You can remove existing serviceRegistries by passing an empty list. This parameter triggers a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesServiceRegistries
-TaskDefinition <String>
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with UpdateService, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running. This parameter triggers a new service deployment.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-VolumeConfiguration <ServiceVolumeConfiguration[]>
The details of the volume that was configuredAtLaunch. You can configure the size, volumeType, IOPS, throughput, snapshot and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition. If set to null, no new deployment is triggered. Otherwise, if this configuration differs from the existing one, it triggers a new deployment. This parameter triggers a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesVolumeConfigurations
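A minimal sketch of reconfiguring a configuredAtLaunch volume, assuming the model types Amazon.ECS.Model.ServiceVolumeConfiguration and Amazon.ECS.Model.ServiceManagedEBSVolumeConfiguration; the volume name must match the task definition, and the size, volume type, and infrastructure role ARN are placeholders.

# Attach a 50 GiB gp3 EBS volume configuration to the service's tasks
$ebs = New-Object Amazon.ECS.Model.ServiceManagedEBSVolumeConfiguration
$ebs.SizeInGiB = 50
$ebs.VolumeType = 'gp3'
$ebs.RoleArn = 'arn:aws:iam::123456789012:role/ecsInfrastructureRole'
$vol = New-Object Amazon.ECS.Model.ServiceVolumeConfiguration
$vol.Name = 'app-data'
$vol.ManagedEBSVolume = $ebs
Update-ECSService -Cluster my-cluster -Service my-http-service -VolumeConfiguration @($vol)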
-VpcLatticeConfiguration <VpcLatticeConfiguration[]>
An object representing the VPC Lattice configuration for the service being updated. This parameter triggers a new service deployment. Starting with version 4 of the SDK this property will default to null. If no data for this property is returned from the service the property will also be null. This was changed to improve performance and allow the SDK and caller to distinguish between a property not set or a property being empty to clear out a value. To retain the previous SDK behavior set the AWSConfigs.InitializeCollections static property to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesVpcLatticeConfigurations

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesAK
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointUrl <String>
The endpoint to make the call against. Note: This parameter is primarily for internal AWS use and is not required/should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-ProfileLocation <String>
Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs). If this optional parameter is omitted this cmdlet will search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio first. If the profile is not found then the cmdlet will search in the ini-format credential file at the default location: (user's home directory)\.aws\credentials. If this parameter is specified then this cmdlet will only search the ini-format credential file at the location given. As the current folder can vary in a shell or during script execution it is advised that you specify a fully qualified path instead of a relative path.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesAWSProfilesLocation, ProfilesLocation
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesStoredCredentials, AWSProfileName
-Region <Object>
The system name of an AWS region or an AWSRegion instance. This governs the endpoint that will be used when calling service operations. Note that the AWS resources referenced in a call are usually region-specific.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRegionToCall
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesSK, SecretAccessKey
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesST

Outputs

This cmdlet returns an Amazon.ECS.Model.Service object. The service call response (type Amazon.ECS.Model.UpdateServiceResponse) can be returned by specifying '-Select *'.

Examples

Example 1

Update-ECSService -Service my-http-service -TaskDefinition amazon-ecs-sample
This example command updates the `my-http-service` service to use the `amazon-ecs-sample` task definition.

Example 2

Update-ECSService -Service my-http-service -DesiredCount 10
This example command updates the desired count of the `my-http-service` service to 10.

Supported Version

AWS Tools for PowerShell: 2.x.y.z