Creating an Amazon ECS service using the console
Create a service to run and maintain a specified number of instances of a task definition simultaneously in a cluster. If one of your tasks fails or stops, the Amazon ECS service scheduler launches another instance of your task definition to replace it. This helps maintain your desired number of tasks in the service.
Decide on the following configuration parameters before you create a service:
-
There are two compute options that distribute your tasks.
-
A capacity provider strategy causes Amazon ECS to distribute your tasks across one or more capacity providers.
-
A launch type causes Amazon ECS to launch your tasks directly on either Fargate or on the EC2 instances registered to your clusters.
-
-
Task definitions that use the awsvpc network mode, or services configured to use a load balancer, must have a networking configuration. By default, the console selects the default Amazon VPC along with all subnets and the default security group within the default Amazon VPC.
-
The placement strategy. The default task placement strategy distributes tasks evenly across Availability Zones.
We recommend that you use Availability Zone rebalancing to help ensure high availability for your service. For more information, see Balancing an Amazon ECS service across Availability Zones.
-
When you use a launch type for your service deployment, by default the service starts in the subnets in your cluster VPC.
-
For the capacity provider strategy, the console selects a compute option by default. The following describes the order that the console uses to select a default:
-
If your cluster has a default capacity provider strategy defined, it is selected.
-
If your cluster doesn't have a default capacity provider strategy defined but you have the Fargate capacity providers added to the cluster, a custom capacity provider strategy that uses the FARGATE capacity provider is selected.
-
If your cluster doesn't have a default capacity provider strategy defined but you have one or more Auto Scaling group capacity providers added to the cluster, the Use custom (Advanced) option is selected and you need to manually define the strategy.
-
If your cluster doesn't have a default capacity provider strategy defined and no capacity providers are added to the cluster, the Fargate launch type is selected.
-
-
The default deployment failure detection options are to use the Amazon ECS deployment circuit breaker option with the Rollback on failures option.
For more information, see How the Amazon ECS deployment circuit breaker detects failures.
-
If you want to use the blue/green deployment option, determine how CodeDeploy moves the applications. The following options are available:
-
CodeDeployDefault.ECSAllAtOnce: Shifts all traffic to the updated Amazon ECS container at once.
-
CodeDeployDefault.ECSLinear10PercentEvery1Minutes: Shifts 10 percent of traffic every minute until all traffic is shifted.
-
CodeDeployDefault.ECSLinear10PercentEvery3Minutes: Shifts 10 percent of traffic every 3 minutes until all traffic is shifted.
-
CodeDeployDefault.ECSCanary10Percent5Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.
-
CodeDeployDefault.ECSCanary10Percent15Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.
-
-
Decide if you want Amazon ECS to increase or decrease the desired number of tasks in your service automatically. For more information, see Automatically scale your Amazon ECS service.
-
If you need an application to connect to other applications that run in Amazon ECS, determine the option that fits your architecture. For more information, see Interconnect Amazon ECS services.
-
When you create a service that uses the Amazon ECS deployment circuit breaker, Amazon ECS creates a service deployment and a service revision. These resources allow you to view detailed information about the service history. For more information, see View service history using Amazon ECS service deployments.
For information about how to create a service using the AWS CLI, see create-service in the AWS Command Line Interface Reference.
For information about how to create a service using AWS CloudFormation, see AWS::ECS::Service in the AWS CloudFormation User Guide.
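For orientation, the console workflow in this topic creates the service through the same CreateService API that the AWS CLI calls. The following command is a minimal sketch only; the cluster, service, task definition, subnet, and security group values are placeholders that you would replace with your own.

aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task-family:1 \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"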
Create a service with the default options
You can use the console to quickly create and deploy a service. The service has the following configuration:
-
Deploys in the VPC and subnets associated with your cluster
-
Deploys one task
-
Uses the rolling deployment
-
Uses the capacity provider strategy with your default capacity provider
-
Uses the deployment circuit breaker to detect failures and sets the option to automatically roll back the deployment on failure
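As a rough AWS CLI equivalent of this default configuration (a sketch with placeholder names, assuming your cluster already has a default capacity provider strategy and your task definition doesn't require an awsvpc network configuration):

# Uses the cluster's default capacity provider strategy because no
# --launch-type or --capacity-provider-strategy is specified.
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task-family \
    --desired-count 1 \
    --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}"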
To deploy a service using the default parameters, follow these steps.
To create a service (Amazon ECS console)
Open the console at https://console.aws.amazon.com/ecs/v2.
-
In the navigation pane, choose Clusters.
-
On the Clusters page, choose the cluster to create the service in.
-
From the Services tab, choose Create.
-
Under Deployment configuration, specify how your application is deployed.
-
For Application type, choose Service.
-
For Task definition, choose the task definition family and revision to use.
-
For Service name, enter a name for your service.
-
For Desired tasks, enter the number of tasks to launch and maintain in the service.
-
-
(Optional) To help identify your service and tasks, expand the Tags section, and then configure your tags.
To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select Turn on Amazon ECS managed tags, and then select Task definitions.
To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select Turn on Amazon ECS managed tags, and then select Service.
Add or remove a tag.
-
[Add a tag] Choose Add tag, and then do the following:
-
For Key, enter the key name.
-
For Value, enter the key value.
-
-
[Remove a tag] Next to the tag, choose Remove tag.
-
Create a service using defined parameters
To create a service by using defined parameters, follow these steps.
To create a service (Amazon ECS console)
Open the console at https://console.aws.amazon.com/ecs/v2.
-
Determine the resource from where you launch the service.
Clusters
-
On the Clusters page, select the cluster to create the service in.
-
From the Services tab, choose Create.
Task definition
-
On the Task definitions page, select the option button next to the task definition.
-
On the Deploy menu, choose Create service.
-
-
(Optional) Choose how your tasks are distributed across your cluster infrastructure. Expand Compute configuration, and then choose your option.
Capacity provider strategy
-
Under Compute options, choose Capacity provider strategy.
-
Choose a strategy:
-
To use the cluster's default capacity provider strategy, choose Use cluster default.
-
If your cluster doesn't have a default capacity provider strategy, or to use a custom strategy, choose Use custom, Add capacity provider strategy, and then define your custom capacity provider strategy by specifying a Base, Capacity provider, and Weight.
-
Note
To use a capacity provider in a strategy, the capacity provider must be associated with the cluster.
Launch type
-
In the Compute options section, select Launch type.
-
For Launch type, choose a launch type.
-
(Optional) When the Fargate launch type is specified, for Platform version, specify the platform version to use. If a platform version isn't specified, the LATEST platform version is used.
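For reference, these two compute options map to mutually exclusive parameters on the create-service AWS CLI command sketched earlier in this topic; the values here are illustrative only.

# Option 1: a custom capacity provider strategy
--capacity-provider-strategy capacityProvider=FARGATE,base=1,weight=1 capacityProvider=FARGATE_SPOT,weight=4

# Option 2: a launch type and, for Fargate, an optional platform version
--launch-type FARGATE --platform-version LATEST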
-
-
To specify how your service is deployed, go to the Deployment configuration section, and then choose your options.
-
For Application type, leave the choice as Service.
-
For Task definition and Revision, choose the task definition family and revision to use.
-
For Service name, enter a name for your service.
-
For Service type, choose the service scheduling strategy.
-
To have the scheduler deploy exactly one task on each active container instance that meets all of the task placement constraints, choose Daemon.
-
To have the scheduler place and maintain the desired number of tasks in your cluster, choose Replica.
-
-
If you chose Replica, for Desired tasks, enter the number of tasks to launch and maintain in the service.
-
If you chose Replica, to have Amazon ECS monitor the distribution of tasks across Availability Zones, and redistribute them when there is an imbalance, under Availability Zone service rebalancing, select Availability Zone service rebalancing.
-
Determine the deployment type for your service. Expand Deployment options, and then specify the following parameters.
Rolling update
-
For Min running tasks, enter the lower limit on the number of tasks in the service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). For more information, see Deployment configuration.
-
For Max running tasks, enter the upper limit on the number of tasks in the service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer).
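If you script the same rolling update settings with the AWS CLI, they map to the deployment configuration on create-service; for example, limits of 100 and 200 percent would look like the following (illustrative values, not necessarily the console defaults):

--deployment-configuration "minimumHealthyPercent=100,maximumPercent=200"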
Blue/green deployment
-
For Deployment configuration, choose how CodeDeploy routes production traffic to your replacement task set during a deployment.
-
For Service role for CodeDeploy, choose the IAM role the service uses to make API requests to authorized AWS services.
-
-
To configure how Amazon ECS detects and handles deployment failures, expand Deployment failure detection, and then choose your options.
-
To stop a deployment when the tasks cannot start, select Use the Amazon ECS deployment circuit breaker.
To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select Rollback on failures.
-
To stop a deployment based on application metrics, select Use CloudWatch alarm(s). Then, from CloudWatch alarm name, choose the alarms. To create a new alarm, go to the CloudWatch console.
To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select Rollback on failures.
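With the AWS CLI, both failure detection options are part of the same deployment configuration parameter on create-service. A sketch, assuming a CloudWatch alarm named my-service-alarm already exists:

--deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true},alarms={alarmNames=[my-service-alarm],enable=true,rollback=true}"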
-
-
-
(Optional) To use Service Connect, select Turn on Service Connect, and then specify the following:
-
Under Service Connect configuration, specify the client mode.
-
If your service runs a network client application that only needs to connect to other services in the namespace, choose Client side only.
-
If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose Client and server.
-
-
To use a namespace that is not the default cluster namespace, for Namespace, choose the service namespace.
-
(Optional) Select the Use log collection option to specify a log configuration. For each available log driver, there are log driver options to specify. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner.
The following describes each container log destination in more detail.
-
Amazon CloudWatch – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
-
Amazon Data Firehose – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
-
Amazon Kinesis Data Streams – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis data stream. To specify a different stream name, change the driver option values.
-
Amazon OpenSearch Service – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided.
-
Amazon S3 – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.
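If you later automate the same setup with the AWS CLI, the Service Connect options described above map to the service-connect-configuration parameter of create-service. The following JSON is a minimal sketch; the namespace, port name, alias, and log group are placeholder values, and the port name must match a port name in your task definition.

--service-connect-configuration file://service-connect.json

Example contents of service-connect.json:
{
    "enabled": true,
    "namespace": "my-namespace",
    "services": [
        {
            "portName": "web",
            "clientAliases": [{ "port": 80, "dnsName": "web" }]
        }
    ],
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/service-connect",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "service-connect"
        }
    }
}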
-
-
-
(Optional) To use Service Discovery, select Use service discovery, and then specify the following.
-
To use a new namespace, choose Create a new namespace under Configure namespace, and then provide a namespace name and description. To use an existing namespace, choose Select an existing namespace and then choose the namespace that you want to use.
-
Provide Service Discovery service information such as the service's name and description.
-
To have Amazon ECS perform periodic container-level health checks, select Enable Amazon ECS task health propagation.
-
For DNS record type, select the DNS record type to create for your service. Amazon ECS service discovery only supports A and SRV records, depending on the network mode that your task definition specifies. For more information about these record types, see Supported DNS Record Types in the Amazon Route 53 Developer Guide.
-
If the task definition that your service task specifies uses the bridge or host network mode, only SRV records are supported. Choose a container name and port combination to associate with the record.
-
If the task definition that your service task specifies uses the awsvpc network mode, select either the A or SRV record type. If you choose A, skip to the next step. If you choose SRV, specify either the port that the service can be found on or a container name and port combination to associate with the record.
For TTL, enter the time, in seconds, that a record set is cached by DNS resolvers and by web browsers.
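With the AWS CLI, the equivalent setting is the service-registries parameter of create-service, which references a Cloud Map service that you create separately. The registry ARN below is a placeholder; for SRV records you would also pass containerName and containerPort:

--service-registries "registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-EXAMPLE12345"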
-
-
-
(Optional) To configure a load balancer for your service, expand Load balancing.
Choose the load balancer.
Application Load Balancer
-
For Load balancer type, select Application Load Balancer.
-
Choose Create a new load balancer to create a new Application Load Balancer or Use an existing load balancer to select an existing Application Load Balancer.
-
For Load balancer name, enter a unique name.
-
For Choose container to load balance, choose the container that hosts the service.
-
For Listener, enter a port and protocol for the Application Load Balancer to listen for connection requests on. By default, the load balancer will be configured to use port 80 and HTTP.
-
For Target group name, enter a name and a protocol for the target group that the Application Load Balancer routes requests to. By default, the target group routes requests to the first container defined in your task definition.
-
For Deregistration delay, enter the number of seconds for the load balancer to wait before changing the target state to UNUSED. The default is 300 seconds.
-
For Health check path, enter an existing path within your container where the Application Load Balancer periodically sends requests to verify the connection health between the Application Load Balancer and the container. The default is the root directory (/).
-
For Health check grace period, enter the amount of time (in seconds) that the service scheduler should ignore unhealthy Elastic Load Balancing target health checks.
Network Load Balancer
-
For Load balancer type, select Network Load Balancer.
-
For Load Balancer, choose an existing Network Load Balancer.
-
For Choose container to load balance, choose the container that hosts the service.
-
For Target group name, enter a name and a protocol for the target group that the Network Load Balancer routes requests to. By default, the target group routes requests to the first container defined in your task definition.
-
For Deregistration delay, enter the number of seconds for the load balancer to wait before changing the target state to UNUSED. The default is 300 seconds.
-
For Health check path, enter an existing path within your container where the Network Load Balancer periodically sends requests to verify the connection health between the Network Load Balancer and the container. The default is the root directory (/).
-
For Health check grace period, enter the amount of time (in seconds) that the service scheduler should ignore unhealthy Elastic Load Balancing target health checks.
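If you configure the same load balancing with the AWS CLI, it maps to the load-balancers and health-check-grace-period-seconds parameters of create-service. The target group ARN, container name, port, and grace period below are placeholders; with the CLI you create the load balancer, listener, and target group yourself beforehand.

--load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-targets/EXAMPLE1234567890ab,containerName=web,containerPort=80" \
--health-check-grace-period-seconds 60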
-
-
(Optional) To use VPC Lattice, select Turn on VPC Lattice, and then specify the following:
-
For Infrastructure role, choose the infrastructure role.
If you haven't created a role, choose Create infrastructure role.
-
Under Target groups, choose the target group or groups. You must choose at least one target group and can have a maximum of five. Choose Add target group to add additional target groups. Choose the Port name, Protocol, and Port for each target group that you chose.
To delete a target group, choose Remove.
Note
-
If you want to add existing target groups, you need to use the AWS CLI. For instructions on how to add target groups using the AWS CLI, see register-targets in the AWS Command Line Interface Reference.
-
While a VPC Lattice service can have multiple target groups, each target group can only be added to one service.
-
-
Complete the VPC Lattice configuration by including your new target groups in the listener default action or in the rules of an existing VPC Lattice service in the VPC Lattice console. For more information, see Listener rules for your VPC Lattice service.
-
-
(Optional) To configure service Auto Scaling, expand Service auto scaling, and then specify the following parameters.
-
To use service auto scaling, select Service auto scaling.
-
For Minimum number of tasks, enter the lower limit of the number of tasks for service auto scaling to use. The desired count will not go below this count.
-
For Maximum number of tasks, enter the upper limit of the number of tasks for service auto scaling to use. The desired count will not go above this count.
-
Choose the policy type. Under Scaling policy type, choose one of the following options.
Target tracking
-
For Scaling policy type, choose Target tracking.
-
For Policy name, enter the name of the policy.
-
For ECS service metric, select one of the following metrics.
-
ECSServiceAverageCPUUtilization – Average CPU utilization of the service.
-
ECSServiceAverageMemoryUtilization – Average memory utilization of the service.
-
ALBRequestCountPerTarget – Number of requests completed per target in an Application Load Balancer target group.
-
-
For Target value, enter the value the service maintains for the selected metric.
-
For Scale-out cooldown period, enter the amount of time, in seconds, after a scale-out activity (add tasks) that must pass before another scale-out activity can start.
-
For Scale-in cooldown period, enter the amount of time, in seconds, after a scale-in activity (remove tasks) that must pass before another scale-in activity can start.
-
To prevent the policy from performing a scale-in activity, select Turn off scale-in. Use this option if you want your scaling policy to scale out for increased traffic but don't need it to scale in when traffic decreases.
Step scaling
-
For Scaling policy type, choose Step scaling.
-
For Policy name, enter the policy name.
-
For Alarm name, enter a unique name for the alarm.
-
For Amazon ECS service metric, choose the metric to use for the alarm.
-
For Statistic, choose the alarm statistic.
-
For Period, choose the period for the alarm.
-
For Alarm condition, choose how to compare the selected metric to the defined threshold.
-
For Threshold to compare metrics and Evaluation period to initiate alarm, enter the threshold used for the alarm and how long to evaluate the threshold.
-
Under Scaling actions, do the following:
-
For Action, select whether to add, remove, or set a specific desired count for your service.
-
If you chose to add or remove tasks, for Value, enter the number of tasks (or percent of existing tasks) to add or remove when the scaling action is initiated. If you chose to set the desired count, enter the number of tasks. For Type, select whether the Value is an integer or a percent value of the existing desired count.
-
For Lower bound and Upper bound, enter the lower boundary and upper boundary of your step scaling adjustment. By default, the lower bound for an add policy is the alarm threshold and the upper bound is positive (+) infinity. By default, the upper bound for a remove policy is the alarm threshold and the lower bound is negative (-) infinity.
-
(Optional) Add additional scaling options. Choose Add new scaling action, and then repeat the Scaling actions steps.
-
For Cooldown period, enter the amount of time, in seconds, to wait for a previous scaling activity to take effect. For an add policy, this is the time after a scale-out activity during which the scaling policy blocks scale-in activities and limits how many tasks can be scaled out at a time. For a remove policy, this is the time after a scale-in activity that must pass before another scale-in activity can start.
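Whichever policy type you choose, service auto scaling is configured through Application Auto Scaling rather than on the service itself. A minimal target tracking sketch with the AWS CLI, using placeholder names and an assumed CPU target of 75 percent:

# 1) Register the service's desired count as a scalable target.
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/my-cluster/my-service \
    --min-capacity 1 \
    --max-capacity 10

# 2) Attach a target tracking policy on average service CPU utilization.
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/my-cluster/my-service \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue": 75.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}, "ScaleOutCooldown": 60, "ScaleInCooldown": 60}'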
-
-
-
-
(Optional) To use a task placement strategy other than the default, expand Task Placement, and then choose from the following options.
For more information, see How Amazon ECS places tasks on container instances.
-
AZ Balanced Spread – Distribute tasks across Availability Zones and across container instances in the Availability Zone.
-
AZ Balanced BinPack – Distribute tasks across Availability Zones and across container instances with the least available memory.
-
BinPack – Distribute tasks based on the least available amount of CPU or memory.
-
One Task Per Host – Place, at most, one task from the service on each container instance.
-
Custom – Define your own task placement strategy.
If you chose Custom, define the algorithm for placing tasks and the rules that are considered during task placement.
-
Under Strategy, for Type and Field, choose the algorithm and the entity to use for the algorithm.
You can enter a maximum of 5 strategies.
-
Under Constraint, for Type and Expression, choose the rule and attribute for the constraint.
For example, to set the constraint to place tasks on T2 instances, for the Expression, enter attribute:ecs.instance-type =~ t2.*.
You can enter a maximum of 10 constraints.
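As a point of reference, the AZ Balanced Spread strategy and a memberOf constraint like the one above roughly correspond to the following create-service parameters in the AWS CLI (illustrative values only):

--placement-strategy "type=spread,field=attribute:ecs.availability-zone" "type=spread,field=instanceId" \
--placement-constraints "type=memberOf,expression=attribute:ecs.instance-type =~ t2.*"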
-
-
If your task definition uses the awsvpc network mode, expand Networking. Use the following steps to specify a custom configuration.
-
For VPC, select the VPC to use.
-
For Subnets, select one or more subnets in the VPC that the task scheduler considers when placing your tasks.
Important
Only private subnets are supported for the awsvpc network mode. Tasks don't receive public IP addresses. Therefore, a NAT gateway is required for outbound internet access, and inbound internet traffic is routed through a load balancer.
-
For Security group, you can either select an existing security group or create a new one. To use an existing security group, select the security group and move to the next step. To create a new security group, choose Create a new security group. You must specify a security group name, description, and then add one or more inbound rules for the security group.
-
For Public IP, choose whether to auto-assign a public IP address to the elastic network interface (ENI) of the task.
AWS Fargate tasks can be assigned a public IP address when run in a public subnet so they have a route to the internet. EC2 tasks can't be assigned a public IP using this field. For more information, see Amazon ECS task networking options for the Fargate launch type and Allocate a network interface for an Amazon ECS task.
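The same networking choices map to the network-configuration parameter of create-service in the AWS CLI. The subnet and security group IDs below are placeholders:

--network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0,subnet-0fedcba9876543210],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"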
-
-
If your task uses a data volume that's compatible with configuration at deployment, you can configure the volume by expanding Volume.
The volume name and volume type are configured when you create a task definition revision and can't be changed when creating a service. To update the volume name and type, you must create a new task definition revision and create a service by using the new revision.
Amazon EBS
-
For EBS volume type, choose the type of EBS volume that you want to attach to your task.
-
For Size (GiB), enter a valid value for the volume size in gibibytes (GiB). You can specify a minimum of 1 GiB and a maximum of 16,384 GiB volume size. This value is required unless you provide a snapshot ID.
-
For IOPS, enter the maximum number of input/output operations per second (IOPS) that the volume should provide. This value is configurable only for io1, io2, and gp3 volume types.
-
For Throughput (MiB/s), enter the throughput that the volume should provide, in mebibytes per second (MiBps, or MiB/s). This value is configurable only for the gp3 volume type.
-
For Snapshot ID, choose an existing Amazon EBS volume snapshot or enter the ARN of a snapshot if you want to create a volume from a snapshot. You can also create a new, empty volume by not choosing or entering a snapshot ID.
-
For File system type, choose the type of file system that will be used for data storage and retrieval on the volume. You can choose either the operating system default or a specific file system type. The default for Linux is XFS. For volumes created from a snapshot, you must specify the same file system type that the volume was using when the snapshot was created. If there is a file system type mismatch, the task will fail to start.
-
For Infrastructure role, choose an IAM role with the necessary permissions that allow Amazon ECS to manage Amazon EBS volumes for tasks. You can attach the AmazonECSInfrastructureRolePolicyForVolumes managed policy to the role, or you can use the policy as a guide to create and attach your own policy with permissions that meet your specific needs. For more information about the necessary permissions, see Amazon ECS infrastructure IAM role.
-
For Encryption, choose Default if you want to use the Amazon EBS encryption by default settings. If your account has Encryption by default configured, the volume will be encrypted with the AWS Key Management Service (AWS KMS) key that's specified in the setting. If you choose Default and Amazon EBS default encryption isn't turned on, the volume will be unencrypted.
If you choose Custom, you can specify an AWS KMS key of your choice for volume encryption.
If you choose None, the volume will be unencrypted unless you have encryption by default configured, or if you create a volume from an encrypted snapshot.
-
If you've chosen Custom for Encryption, you must specify the AWS KMS key that you want to use. For KMS key, choose an AWS KMS key or enter a key ARN. If you choose to encrypt your volume by using a symmetric customer managed key, make sure that you have the right permissions defined in your AWS KMS key policy. For more information, see Data encryption for Amazon EBS volumes.
-
(Optional) Under Tags, you can add tags to your Amazon EBS volume by either propagating tags from the task definition or service, or by providing your own tags.
If you want to propagate tags from the task definition, choose Task definition for Propagate tags from. If you want to propagate tags from the service, choose Service for Propagate tags from. If you choose Do not propagate, or if you don't choose a value, the tags aren't propagated.
If you want to provide your own tags, choose Add tag and then provide the key and value for each tag you add.
For more information about tagging Amazon EBS volumes, see Tagging Amazon EBS volumes.
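If you script the same Amazon EBS volume configuration with the AWS CLI, it maps to the volume-configurations parameter of create-service. This is a sketch only; the volume name must match a volume that's marked as configured at launch in your task definition, and the role ARN is a placeholder.

--volume-configurations '[{"name": "ebs-volume", "managedEBSVolume": {"roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole", "volumeType": "gp3", "sizeInGiB": 20, "filesystemType": "xfs"}}]'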
-
-
(Optional) To help identify your service and tasks, expand the Tags section, and then configure your tags.
To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Task definitions.
To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Service.
Add or remove a tag.
-
[Add a tag] Choose Add tag, and then do the following:
-
For Key, enter the key name.
-
For Value, enter the key value.
-
-
[Remove a tag] Next to the tag, choose Remove tag.
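With the AWS CLI, the equivalent tagging options on create-service would be along these lines (the tag key and value are examples):

--enable-ecs-managed-tags \
--propagate-tags SERVICE \
--tags key=environment,value=production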
-
-
Choose Create.
Next steps
Track your deployment and view your service history for services that use the Amazon ECS deployment circuit breaker. For more information, see View service history using Amazon ECS service deployments.