Creating a service using the console - Amazon Elastic Container Service

Creating a service using the console

You can create a service using the console.

Consider the following when you use the console:

  • There are two compute options that distribute your tasks.

    • A capacity provider strategy causes Amazon ECS to distribute your tasks across one or more capacity providers.

    • A launch type causes Amazon ECS to launch your tasks directly on either Fargate or on the Amazon EC2 instances registered to your clusters.

  • Task definitions that use the awsvpc network mode or services configured to use a load balancer must have a networking configuration. By default, the console selects the default Amazon VPC along with all subnets and the default security group within the default Amazon VPC.

  • The default task placement strategy distributes tasks evenly across Availability Zones.

  • When you use a launch type for your service deployment, by default the service starts in the subnets in your cluster VPC.

  • For the capacity provider strategy, the console selects a compute option by default. The following describes the order that the console uses to select a default:

    • If your cluster has a default capacity provider strategy defined, it is selected.

    • If your cluster doesn't have a default capacity provider strategy defined but you have the Fargate capacity providers added to the cluster, a custom capacity provider strategy that uses the FARGATE capacity provider is selected.

    • If your cluster doesn't have a default capacity provider strategy defined but you have one or more Auto Scaling group capacity providers added to the cluster, the Use custom (Advanced) option is selected and you need to manually define the strategy.

    • If your cluster doesn't have a default capacity provider strategy defined and no capacity providers are added to the cluster, the Fargate launch type is selected.

  • The default deployment failure detection options are to use the Amazon ECS deployment circuit breaker option with the Rollback on failures option.

    For more information, see Deployment circuit breaker.

  • If you want to use the blue/green deployment option, determine how CodeDeploy moves the applications. The following options are available:

    • CodeDeployDefault.ECSAllAtOnce: Shifts all traffic to the updated Amazon ECS container at once.

    • CodeDeployDefault.ECSLinear10PercentEvery1Minutes: Shifts 10 percent of traffic every minute until all traffic is shifted.

    • CodeDeployDefault.ECSLinear10PercentEvery3Minutes: Shifts 10 percent of traffic every 3 minutes until all traffic is shifted.

    • CodeDeployDefault.ECSCanary10Percent5Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.

    • CodeDeployDefault.ECSCanary10Percent15Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.

  • If you need an application to connect to other applications that run in Amazon ECS, determine the option that fits your architecture. For more information, see Interconnecting services.

  • You must use AWS CloudFormation or the AWS Command Line Interface to deploy a service that uses any of the following parameters:

    • Tracking policy with a custom metric

    • Update Service – You can't update the awsvpc network configuration or the health check grace period.

    For information about how to create a service using the AWS CLI, see create-service in the AWS Command Line Interface Reference.

    For information about how to create a service using AWS CloudFormation, see AWS::ECS::Service in the AWS CloudFormation User Guide.
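As noted above, the console maps to the same CreateService API that the AWS CLI and AWS CloudFormation use. The following is a minimal sketch of CreateService request parameters (as you might pass to `aws ecs create-service --cli-input-json` or boto3's `create_service`); the cluster, service, and task definition names are placeholders:

```python
# Sketch of CreateService request parameters (all names are placeholders).
params = {
    "cluster": "my-cluster",
    "serviceName": "my-service",
    "taskDefinition": "my-task-family:1",  # family:revision
    "desiredCount": 2,
    # A capacity provider strategy distributes tasks across one or more
    # providers; it is mutually exclusive with "launchType".
    "capacityProviderStrategy": [
        {"capacityProvider": "FARGATE", "weight": 1, "base": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 4},
    ],
}

# To use a launch type instead, drop the strategy and set launchType:
alt = {k: v for k, v in params.items() if k != "capacityProviderStrategy"}
alt["launchType"] = "FARGATE"
```

The two compute options are mutually exclusive in a single request, which mirrors the console's either/or choice between a capacity provider strategy and a launch type.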

Quickly create a service

You can use the console to quickly create and deploy a service. The service has the following configuration:

  • Deploys in the VPC and subnets associated with your cluster

  • Deploys one task

  • Uses the rolling update deployment type

  • Uses the capacity provider strategy with your default capacity provider

  • Uses the deployment circuit breaker to detect failures and sets the option to automatically roll back the deployment on failure

To deploy a service using the default parameters, follow these steps.

To create a service (Amazon ECS console)
  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. In the navigation pane, choose Clusters.

  3. On the Clusters page, choose the cluster to create the service in.

  4. From the Services tab, choose Create.

  5. Under Deployment configuration, specify how your application is deployed.

    1. For Application type, choose Service.

    2. For Task definition, choose the task definition family and revision to use.

    3. For Service name, enter a name for your service.

    4. For Desired tasks, enter the number of tasks to launch and maintain in the service.

  6. (Optional) To help identify your service and tasks, expand the Tags section, and then configure your tags.

    To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Task definitions.

    To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Service.

    Add or remove a tag.

    • [Add a tag] Choose Add tag, and then do the following:

      • For Key, enter the key name.

      • For Value, enter the key value.

    • [Remove a tag] Next to the tag, choose Remove tag.
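The quick-create defaults listed above correspond roughly to the following CreateService parameters (a sketch, with placeholder names; the console derives the VPC and subnets from your cluster):

```python
# Rough CreateService equivalent of the console quick-create defaults
# (placeholder names; not a definitive mapping of console behavior).
quick_create = {
    "cluster": "my-cluster",
    "serviceName": "my-service",
    "taskDefinition": "my-task-family",  # latest ACTIVE revision
    "desiredCount": 1,                   # deploys one task
    # Rolling update with the circuit breaker and automatic rollback:
    "deploymentConfiguration": {
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    },
    # Omitting launchType and capacityProviderStrategy falls back to the
    # cluster's default capacity provider strategy.
}
```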

Create a service using defined parameters

To create a service by using defined parameters, follow these steps.

To create a service (Amazon ECS console)
  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. Determine the resource from which you launch the service.

    To start the service from Clusters:

    1. On the Clusters page, select the cluster to create the service in.

    2. From the Services tab, choose Create.

    To start the service from a Task definition:

    1. On the Task definitions page, select the option button next to the task definition.

    2. On the Deploy menu, choose Create service.

  3. (Optional) Choose how your tasks are distributed across your cluster infrastructure. Expand Compute configuration, and then choose your option.

    Capacity provider strategy

    1. Under Compute options, choose Capacity provider strategy.

    2. Choose a strategy:

      • To use the cluster's default capacity provider strategy, choose Use cluster default.

      • If your cluster doesn't have a default capacity provider strategy, or to use a custom strategy, choose Use custom, Add capacity provider strategy, and then define your custom capacity provider strategy by specifying a Base, Capacity provider, and Weight.

    Note

    To use a capacity provider in a strategy, the capacity provider must be associated with the cluster. For more information about capacity provider strategies, see Amazon ECS capacity providers.

    Launch type

    1. In the Compute options section, select Launch type.

    2. For Launch type, choose a launch type.

    3. (Optional) When the Fargate launch type is specified, for Platform version, specify the platform version to use. If a platform version isn't specified, the LATEST platform version is used.

  4. To specify how your service is deployed, go to the Deployment configuration section, and then choose your options.

    1. For Application type, leave the choice as Service.

    2. For Task definition and Revision, choose the task definition family and revision to use.

    3. For Service name, enter a name for your service.

    4. For Service type, choose the service scheduling strategy.

      • To have the scheduler deploy exactly one task on each active container instance that meets all of the task placement constraints, choose Daemon.

      • To have the scheduler place and maintain the desired number of tasks in your cluster, choose Replica.

      For more information, see Service scheduler concepts.

    5. If you chose Replica, for Desired tasks, enter the number of tasks to launch and maintain in the service.

    6. Determine the deployment type for your service. Expand Deployment options, and then specify the following parameters.

      Rolling update

      1. For Min running tasks, enter the lower limit on the number of tasks in the service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). For more information, see Deployment configuration.

      2. For Max running tasks, enter the upper limit on the number of tasks in the service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer).

      Blue/green deployment

      1. For Deployment configuration, choose how CodeDeploy routes production traffic to your replacement task set during a deployment.

      2. For Service role for CodeDeploy, choose the IAM role the service uses to make API requests to authorized AWS services.

    7. To configure how Amazon ECS detects and handles deployment failures, expand Deployment failure detection, and then choose your options.

      1. To stop a deployment when the tasks cannot start, select Use the Amazon ECS deployment circuit breaker.

        To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select Rollback on failures.

      2. To stop a deployment based on application metrics, select Use CloudWatch alarm(s). Then, from CloudWatch alarm name, choose the alarms. To create a new alarm, go to the CloudWatch console.

        To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select Rollback on failures.
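For reference, the rolling-update limits and failure detection options in this step map to the deploymentConfiguration parameter of CreateService. A sketch with illustrative values (the alarm name is a placeholder):

```python
# Sketch: rolling-update limits and deployment failure detection as
# CreateService parameters (values are illustrative).
deployment = {
    "deploymentConfiguration": {
        # Min running tasks: 100% keeps the full desired count RUNNING.
        "minimumHealthyPercent": 100,
        # Max running tasks: 200% allows a full replacement set during deploys.
        "maximumPercent": 200,
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
        # CloudWatch alarm based failure detection:
        "alarms": {
            "alarmNames": ["my-deploy-alarm"],  # placeholder alarm name
            "enable": True,
            "rollback": True,
        },
    },
    # A blue/green deployment instead uses the CODE_DEPLOY controller:
    # "deploymentController": {"type": "CODE_DEPLOY"},
}
```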

  5. (Optional) To use Service Connect, select Turn on Service Connect, and then specify the following:

    1. Under Service Connect configuration, specify the client mode.

      • If your service runs a network client application that only needs to connect to other services in the namespace, choose Client side only.

      • If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose Client and server.

    2. To use a namespace that is not the default cluster namespace, for Namespace, choose the service namespace.

    3. (Optional) Select the Use log collection option to specify a log configuration. For each available log driver, there are log driver options to specify. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see Using custom log routing.

      The following describes each container log destination in more detail.

      • Amazon CloudWatch – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.

      • Amazon Data Firehose – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.

      • Amazon Kinesis Data Streams – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis data stream. To specify a different stream name, change the driver option values.

      • Amazon OpenSearch Service – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. For more information, see Forwarding logs to an Amazon OpenSearch Service domain.

      • Amazon S3 – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.
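The Service Connect options in this step correspond to the serviceConnectConfiguration parameter of CreateService. A sketch of the "client and server" mode with a CloudWatch Logs configuration (the namespace, port name, DNS name, and log group are placeholders):

```python
# Sketch: Service Connect in "client and server" mode (placeholder names).
service_connect = {
    "serviceConnectConfiguration": {
        "enabled": True,
        "namespace": "my-namespace",
        # Omit "services" entirely for client-side-only mode.
        "services": [{
            "portName": "web",  # must match a named port in the task definition
            "clientAliases": [{"port": 80, "dnsName": "web.internal"}],
        }],
        # Default console option: send container logs to CloudWatch Logs.
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/service-connect",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "sc",
            },
        },
    },
}
```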

  6. (Optional) To use Service Discovery, select Use service discovery, and then specify the following.

    1. To use a new namespace, choose Create a new namespace under Configure namespace, and then provide a namespace name and description. To use an existing namespace, choose Select an existing namespace and then choose the namespace that you want to use.

    2. Provide Service Discovery service information such as the service's name and description.

    3. To have Amazon ECS perform periodic container-level health checks, select Enable Amazon ECS task health propagation.

    4. For DNS record type, select the DNS record type to create for your service. Amazon ECS service discovery only supports A and SRV records, depending on the network mode that your task definition specifies. For more information about these record types, see Supported DNS Record Types in the Amazon Route 53 Developer Guide.

      • If the task definition that your service task specifies uses the bridge or host network mode, only SRV records are supported. Choose a container name and port combination to associate with the record.

      • If the task definition that your service task specifies uses the awsvpc network mode, select either the A or SRV record type. If you choose A, skip to the next step. If you choose SRV, specify either the port that the service can be found on or a container name and port combination to associate with the record.

      For TTL, enter the time, in seconds, that a record set is cached by DNS resolvers and by web browsers.
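Behind the scenes, service discovery associates the service with an AWS Cloud Map service through the serviceRegistries parameter of CreateService. A sketch (the registry ARN is a placeholder):

```python
# Sketch: Cloud Map service discovery registration (placeholder ARN).
service_registries = {
    "serviceRegistries": [{
        "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example",
        # For SRV records with the bridge or host network mode, also pick
        # the container name and port combination, for example:
        # "containerName": "web", "containerPort": 80,
    }],
}
```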

  7. (Optional) To configure a load balancer for your service, expand Load balancing.

    Choose the load balancer.

    Application Load Balancer

    1. For Load balancer type, select Application Load Balancer.

    2. Choose Create a new load balancer to create a new Application Load Balancer or Use an existing load balancer to select an existing Application Load Balancer.

    3. For Load balancer name, enter a unique name.

    4. For Choose container to load balance, choose the container that hosts the service.

    5. For Listener, enter a port and protocol for the Application Load Balancer to listen for connection requests on. By default, the load balancer will be configured to use port 80 and HTTP.

    6. For Target group name, enter a name and a protocol for the target group that the Application Load Balancer routes requests to. By default, the target group routes requests to the first container defined in your task definition.

    7. For Deregistration delay, enter the number of seconds for the load balancer to change the target state to UNUSED. The default is 300 seconds.

    8. For Health check path, enter an existing path within your container where the Application Load Balancer periodically sends requests to verify the connection health between the Application Load Balancer and the container. The default is the root directory (/).

    9. For Health check grace period, enter the amount of time (in seconds) that the service scheduler should ignore unhealthy Elastic Load Balancing target health checks.

    Network Load Balancer

    1. For Load balancer type, select Network Load Balancer.

    2. For Load Balancer, choose an existing Network Load Balancer.

    3. For Choose container to load balance, choose the container that hosts the service.

    4. For Target group name, enter a name and a protocol for the target group that the Network Load Balancer routes requests to. By default, the target group routes requests to the first container defined in your task definition.

    5. For Deregistration delay, enter the number of seconds for the load balancer to change the target state to UNUSED. The default is 300 seconds.

    6. For Health check path, enter an existing path within your container where the Network Load Balancer periodically sends requests to verify the connection health between the Network Load Balancer and the container. The default is the root directory (/).

    7. For Health check grace period, enter the amount of time (in seconds) that the service scheduler should ignore unhealthy Elastic Load Balancing target health checks.
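The container-to-target-group mapping you choose here corresponds to the loadBalancers and healthCheckGracePeriodSeconds parameters of CreateService. A sketch (the target group ARN and container name are placeholders):

```python
# Sketch: attaching the service to a load balancer target group
# (placeholder ARN and container name).
load_balancing = {
    "loadBalancers": [{
        # The target group the Application or Network Load Balancer
        # routes requests to:
        "targetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                           "123456789012:targetgroup/my-targets/0123456789abcdef"),
        # Which container and port in the task definition receive traffic:
        "containerName": "web",
        "containerPort": 80,
    }],
    # Seconds to ignore unhealthy ELB target health checks after launch:
    "healthCheckGracePeriodSeconds": 60,
}
```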

  8. (Optional) To configure service Auto Scaling, expand Service auto scaling, and then specify the following parameters.

    1. To use service auto scaling, select Service auto scaling.

    2. For Minimum number of tasks, enter the lower limit of the number of tasks for service auto scaling to use. The desired count will not go below this count.

    3. For Maximum number of tasks, enter the upper limit of the number of tasks for service auto scaling to use. The desired count will not go above this count.

    4. Choose the policy type. Under Scaling policy type, choose one of the following options.

      Target tracking

      1. For Scaling policy type, choose Target tracking.

      2. For Policy name, enter the name of the policy.

      3. For ECS service metric, select one of the following metrics.

        • ECSServiceAverageCPUUtilization – Average CPU utilization of the service.

        • ECSServiceAverageMemoryUtilization – Average memory utilization of the service.

        • ALBRequestCountPerTarget – Number of requests completed per target in an Application Load Balancer target group.

      4. For Target value, enter the value the service maintains for the selected metric.

      5. For Scale-out cooldown period, enter the amount of time, in seconds, after a scale-out activity (add tasks) that must pass before another scale-out activity can start.

      6. For Scale-in cooldown period, enter the amount of time, in seconds, after a scale-in activity (remove tasks) that must pass before another scale-in activity can start.

      7. (Optional) To prevent the policy from performing a scale-in activity, select Turn off scale-in. Use this option if you want your scaling policy to scale out for increased traffic but don't need it to scale in when traffic decreases.

      Step scaling

      1. For Scaling policy type, choose Step scaling.

      2. For Policy name, enter the policy name.

      3. For Alarm name, enter a unique name for the alarm.

      4. For Amazon ECS service metric, choose the metric to use for the alarm.

      5. For Statistic, choose the alarm statistic.

      6. For Period, choose the period for the alarm.

      7. For Alarm condition, choose how to compare the selected metric to the defined threshold.

      8. For Threshold to compare metrics and Evaluation period to initiate alarm, enter the threshold used for the alarm and how long to evaluate the threshold.

      9. Under Scaling actions, do the following:

        • For Action, select whether to add, remove, or set a specific desired count for your service.

        • If you chose to add or remove tasks, for Value, enter the number of tasks (or percent of existing tasks) to add or remove when the scaling action is initiated. If you chose to set the desired count, enter the number of tasks. For Type, select whether the Value is an integer or a percent value of the existing desired count.

        • For Lower bound and Upper bound, enter the lower boundary and upper boundary of your step scaling adjustment. By default, the lower bound for an add policy is the alarm threshold and the upper bound is positive (+) infinity. By default, the upper bound for a remove policy is the alarm threshold and the lower bound is negative (-) infinity.

        • (Optional) Add additional scaling options. Choose Add new scaling action, and then repeat the Scaling actions steps.

        • For Cooldown period, enter the amount of time, in seconds, to wait for a previous scaling activity to take effect. For an add policy, this is the time after a scale-out activity that the scaling policy blocks scale-in activities and limits how many tasks can be scaled out at a time. For a remove policy, this is the time after a scale-in activity that must pass before another scale-in activity can start.
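The console's service auto scaling options map to Application Auto Scaling's RegisterScalableTarget and PutScalingPolicy APIs. A sketch of the target-tracking variant with illustrative values (cluster, service, and policy names are placeholders):

```python
# Sketch: target tracking for an ECS service via Application Auto Scaling
# (placeholder names; values are illustrative).
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 1,   # desired count never goes below this
    "MaxCapacity": 10,  # or above this
}

policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "TargetValue": 75.0,      # keep average CPU utilization near 75%
        "ScaleOutCooldown": 60,   # seconds between scale-out activities
        "ScaleInCooldown": 60,    # seconds between scale-in activities
        "DisableScaleIn": False,  # True corresponds to "Turn off scale-in"
    },
}
```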

  9. (Optional) To use a task placement strategy other than the default, expand Task Placement, and then choose from the following options.

    For more information, see How Amazon ECS places tasks on container instances.

    • AZ Balanced Spread – Distribute tasks across Availability Zones and across container instances in the Availability Zone.

    • AZ Balanced BinPack – Distribute tasks across Availability Zones and across container instances with the least available memory.

    • BinPack – Distribute tasks based on the least available amount of CPU or memory.

    • One Task Per Host – Place, at most, one task from the service on each container instance.

    • Custom – Define your own task placement strategy.

    If you chose Custom, define the algorithm for placing tasks and the rules that are considered during task placement.

    • Under Strategy, for Type and Field, choose the algorithm and the entity to use for the algorithm.

      You can enter a maximum of 5 strategies.

    • Under Constraint, for Type and Expression, choose the rule and attribute for the constraint.

      For example, to set the constraint to place tasks on T2 instances, for the Expression, enter attribute:ecs.instance-type =~ t2.*.

      You can enter a maximum of 10 constraints.
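For reference, the presets above decompose into ordered placement strategies and constraints in the CreateService request. A sketch of the AZ Balanced Spread preset plus the T2-only constraint from the example (strategy order matters):

```python
# Sketch: "AZ Balanced Spread" plus a T2 instance-type constraint as
# CreateService parameters (maximum of 5 strategies and 10 constraints).
placement = {
    "placementStrategy": [
        # Spread across Availability Zones first...
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        # ...then across container instances within each zone.
        {"type": "spread", "field": "instanceId"},
    ],
    "placementConstraints": [
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t2.*"},
    ],
}
```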

  10. If your task definition uses the awsvpc network mode, expand Networking. Use the following steps to specify a custom configuration.

    1. For VPC, select the VPC to use.

    2. For Subnets, select one or more subnets in the VPC that the task scheduler considers when placing your tasks.

      Important

      Only private subnets are supported for the awsvpc network mode. Tasks don't receive public IP addresses. Therefore, a NAT gateway is required for outbound internet access, and inbound internet traffic is routed through a load balancer.

    3. For Security group, you can either select an existing security group or create a new one. To use an existing security group, select the security group and move to the next step. To create a new security group, choose Create a new security group. You must specify a security group name, description, and then add one or more inbound rules for the security group.
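The VPC, subnet, and security group choices in this step map to the networkConfiguration parameter of CreateService. A sketch (the subnet and security group IDs are placeholders):

```python
# Sketch: awsvpc network configuration (placeholder resource IDs).
network = {
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            # Private subnets with a NAT gateway for outbound access;
            # inbound traffic arrives through the load balancer.
            "assignPublicIp": "DISABLED",
        },
    },
}
```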

  11. If your task uses a data volume that's compatible with configuration at deployment, you can configure the volume by expanding Volume.

    The volume name and volume type are configured when you create a task definition revision and can't be changed when creating a service. To update the volume name and type, you must create a new task definition revision and create a service by using the new revision.

    Amazon EBS

    1. For EBS volume type, choose the type of EBS volume that you want to attach to your task.

    2. For Size (GiB), enter a valid value for the volume size in gibibytes (GiB). You can specify a minimum of 1 GiB and a maximum of 16,384 GiB volume size. This value is required unless you provide a snapshot ID.

    3. For IOPS, enter the maximum number of input/output operations per second (IOPS) that the volume should provide. This value is configurable only for the io1, io2, and gp3 volume types.

    4. For Throughput (MiB/s), enter the throughput that the volume should provide, in mebibytes per second (MiBps, or MiB/s). This value is configurable only for the gp3 volume type.

    5. For Snapshot ID, choose an existing Amazon EBS volume snapshot or enter the ARN of a snapshot if you want to create a volume from a snapshot. You can also create a new, empty volume by not choosing or entering a snapshot ID.

    6. For File system type, choose the type of file system that will be used for data storage and retrieval on the volume. You can choose either the operating system default or a specific file system type. The default for Linux is XFS. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.

    7. For Infrastructure role, choose an IAM role with the necessary permissions that allow Amazon ECS to manage Amazon EBS volumes for tasks. You can attach the AmazonECSInfrastructureRolePolicyForVolumes managed policy to the role, or you can use the policy as a guide to create and attach your own policy with permissions that meet your specific needs. For more information about the necessary permissions, see Amazon ECS infrastructure IAM role.

    8. For Encryption, choose Default if you want to use the Amazon EBS encryption by default settings. If your account has Encryption by default configured, the volume will be encrypted with the AWS Key Management Service (AWS KMS) key that's specified in the setting. If you choose Default and Amazon EBS default encryption isn't turned on, the volume will be unencrypted.

      If you choose Custom, you can specify an AWS KMS key of your choice for volume encryption.

      If you choose None, the volume will be unencrypted unless you have encryption by default configured, or if you create a volume from an encrypted snapshot.

    9. If you've chosen Custom for Encryption, you must specify the AWS KMS key that you want to use. For KMS key, choose an AWS KMS key or enter a key ARN. If you choose to encrypt your volume by using a symmetric customer managed key, make sure that you have the right permissions defined in your AWS KMS key policy. For more information, see Data encryption for Amazon EBS volumes.

    10. (Optional) Under Tags, you can add tags to your Amazon EBS volume by either propagating tags from the task definition or service, or by providing your own tags.

      If you want to propagate tags from the task definition, choose Task definition for Propagate tags from. If you want to propagate tags from the service, choose Service for Propagate tags from. If you choose Do not propagate, or if you don't choose a value, the tags aren't propagated.

      If you want to provide your own tags, choose Add tag and then provide the key and value for each tag you add.

      For more information about tagging Amazon EBS volumes, see Tagging Amazon EBS volumes.
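The Amazon EBS options above correspond to the volumeConfigurations parameter of CreateService. A sketch with illustrative values (the volume name must match the task definition; the role ARN is a placeholder):

```python
# Sketch: deploy-time Amazon EBS volume configuration (placeholder names).
volumes = {
    "volumeConfigurations": [{
        "name": "data-volume",  # must match the task definition volume name
        "managedEBSVolume": {
            "volumeType": "gp3",
            "sizeInGiB": 20,  # 1-16,384 GiB; required unless a snapshot is given
            # Infrastructure role that lets ECS manage EBS volumes for tasks:
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "filesystemType": "xfs",  # Linux default
            "tagSpecifications": [{
                "resourceType": "volume",
                "propagateTags": "SERVICE",  # or TASK_DEFINITION / NONE
            }],
        },
    }],
}
```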

  12. (Optional) To help identify your service and tasks, expand the Tags section, and then configure your tags.

    To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Task definitions.

    To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select Turn on Amazon ECS managed tags, and then for Propagate tags from, choose Service.

    Add or remove a tag.

    • [Add a tag] Choose Add tag, and then do the following:

      • For Key, enter the key name.

      • For Value, enter the key value.

    • [Remove a tag] Next to the tag, choose Remove tag.