Creating an Amazon ECS task definition using the console
You create a task definition to define the application that you run as a task or service.
When you create a task definition for the external launch type, you must create the task definition by using the JSON editor and set the requiresCompatibilities parameter to EXTERNAL.
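For reference, the following is a minimal, hedged sketch of registering such a task definition with the AWS SDK for Python (Boto3). The family name, container name, and image are placeholder values.

```python
import boto3

ecs = boto3.client("ecs")

# Minimal sketch of an external launch type task definition.
# "external-app", "web", and the image are placeholders.
ecs.register_task_definition(
    family="external-app",
    requiresCompatibilities=["EXTERNAL"],  # required for the external launch type
    containerDefinitions=[
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "memory": 256,  # MiB; required here because no task-level memory is set
            "essential": True,
        }
    ],
)
```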
You can create a task definition by using the console experience, or by specifying a JSON file.
JSON validation
The Amazon ECS console JSON editor validates the following in the JSON file:
-
The file is a valid JSON file.
-
The file doesn't contain any extraneous keys.
-
The file contains the family parameter.
-
There is at least one entry under containerDefinitions.
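As an illustration only, the following Python sketch approximates the first, third, and fourth checks locally; the extraneous-key check depends on the full task definition schema and isn't reproduced here.

```python
import json

def validate_task_definition(text: str) -> list[str]:
    """Approximate the console JSON editor's basic checks (illustrative only)."""
    try:
        doc = json.loads(text)  # check 1: the file is valid JSON
    except json.JSONDecodeError as err:
        return [f"not valid JSON: {err}"]
    errors = []
    if "family" not in doc:  # check 3: the family parameter is present
        errors.append("missing required 'family' parameter")
    if not doc.get("containerDefinitions"):  # check 4: at least one container
        errors.append("'containerDefinitions' needs at least one entry")
    return errors

print(validate_task_definition('{"family": "web", "containerDefinitions": [{}]}'))  # []
```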
AWS CloudFormation stacks
The following behavior applies to task definitions that were created in the new Amazon ECS console before January 12, 2023.
When you create a task definition, the Amazon ECS console automatically creates a CloudFormation stack with a name that begins with ECS-Console-V2-TaskDefinition-. If you used the AWS CLI or an AWS SDK to deregister the task definition, then you must manually delete the task definition stack. For more information, see Deleting a stack in the AWS CloudFormation User Guide.
Task definitions created after January 12, 2023, do not have a CloudFormation stack automatically created for them.
Procedure
- Amazon ECS console
-
Open the console at https://console.aws.amazon.com/ecs/v2.
-
In the navigation pane, choose Task definitions.
-
On the Create new task definition menu, choose Create new task definition.
-
For Task definition family, specify a unique name for the task definition.
-
For Launch type, choose the application environment. The console default is AWS Fargate (which is serverless). Amazon ECS uses this value to perform validation to ensure that the task definition parameters are valid for the infrastructure type.
-
For Operating system/Architecture, choose the operating system and CPU architecture for the task.
To run your task on a 64-bit ARM architecture, choose Linux/ARM64. For more information, see Runtime platform.
To run your AWS Fargate tasks on Windows containers, choose a supported Windows operating system. For more information, see Operating Systems and architectures.
-
For Task size, choose the CPU and memory values to reserve for the task. The CPU value is specified in vCPUs and memory is specified in GB.
For tasks hosted on Fargate, the following table shows the valid CPU and memory combinations.
| CPU value | Memory value | Operating systems supported for AWS Fargate |
| --- | --- | --- |
| 256 (.25 vCPU) | 512 MiB, 1 GB, 2 GB | Linux |
| 512 (.5 vCPU) | 1 GB, 2 GB, 3 GB, 4 GB | Linux |
| 1024 (1 vCPU) | 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB | Linux, Windows |
| 2048 (2 vCPU) | Between 4 GB and 16 GB in 1 GB increments | Linux, Windows |
| 4096 (4 vCPU) | Between 8 GB and 30 GB in 1 GB increments | Linux, Windows |
| 8192 (8 vCPU) (requires Linux platform 1.4.0 or later) | Between 16 GB and 60 GB in 4 GB increments | Linux |
| 16384 (16 vCPU) (requires Linux platform 1.4.0 or later) | Between 32 GB and 120 GB in 8 GB increments | Linux |

For tasks hosted on Amazon EC2, supported task CPU values are between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs). To specify the memory value in GB, enter GB after the value. For example, to set the Memory value to 3GB, enter 3GB.
Note
Task-level CPU and memory parameters are ignored for Windows containers.
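In the task definition JSON, these fields correspond to the task-level cpu and memory parameters, which are strings expressed in CPU units and MiB. A sketch using one valid Fargate combination from the table above:

```python
# One valid Fargate combination: 1 vCPU with 3 GB of memory.
task_size = {
    "cpu": "1024",     # CPU units; 1024 units = 1 vCPU
    "memory": "3072",  # MiB; 3072 MiB = 3 GB
}
```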
-
For Network mode, choose the network mode to use. The default is awsvpc mode. For more information, see Amazon ECS task networking.
If you choose bridge, under Port mappings, for Host port, enter the port number on the container instance to reserve for your container.
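For reference, the following is a sketch of the corresponding container-level JSON fragment in awsvpc mode; the container name and image are placeholders. In awsvpc mode, the host port is the same as the container port, so only the container port is specified.

```python
# A container definition fragment with one awsvpc-mode port mapping.
container = {
    "name": "web",  # placeholder
    "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder image
    "essential": True,
    "portMappings": [
        {"containerPort": 80, "protocol": "tcp"}  # host port matches in awsvpc mode
    ],
}
network_mode = "awsvpc"
```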
-
(Optional) Expand the Task roles section to configure the AWS Identity and Access Management (IAM) roles for the task:
-
For Task role, choose the IAM role to assign to the task. A task IAM role provides permissions for the containers in a task to call AWS API operations.
-
For Task execution role, choose the role.
For information about when to use a task execution role, see Amazon ECS task execution IAM role. If you don't need the role, choose None.
-
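In the task definition JSON, these choices map to the taskRoleArn and executionRoleArn parameters, as in the following sketch with a placeholder account ID and role names.

```python
# Placeholder ARNs; substitute roles that exist in your account.
roles = {
    "taskRoleArn": "arn:aws:iam::111122223333:role/app-task-role",              # task role
    "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # execution role
}
```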
-
(Optional) Expand the Task placement section to add placement constraints. Task placement constraints allow you to filter the container instances used for the placement of your tasks by using built-in or custom attributes.
-
(Optional) Expand the Fault injection section to enable fault injection. Fault injection lets you test how your application responds to certain impairment scenarios.
-
For each container to define in your task definition, complete the following steps.
-
For Name, enter a name for the container.
-
For Image URI, enter the image to use to start a container. Images in the Amazon ECR Public Gallery registry can be specified by using the Amazon ECR Public registry name only. For example, if public.ecr.aws/ecs/amazon-ecs-agent:latest is specified, the Amazon Linux container hosted on the Amazon ECR Public Gallery is used. For all other repositories, specify the repository by using either the repository-url/image:tag or repository-url/image@digest formats.
-
If your image is in a private registry outside of Amazon ECR, under Private registry, turn on Private registry authentication. Then, in Secrets Manager ARN or name, enter the Amazon Resource Name (ARN) of the secret.
-
For Essential container, if your task definition has two or more containers defined, you can specify whether the container should be considered essential. When a container is marked as Essential, if that container stops, then the task is stopped. Each task definition must contain at least one essential container.
-
A port mapping allows the container to access ports on the host to send or receive traffic. Under Port mappings, do one of the following:
-
When you use the awsvpc network mode, for Container port and Protocol, choose the port mapping to use for the container.
-
When you use the bridge network mode, for Container port and Protocol, choose the port mapping to use for the container.
Choose Add more port mappings to specify additional container port mappings.
-
-
To give the container read-only access to its root file system, for Read only root file system, select Read only.
-
(Optional) To define the container-level CPU, GPU, and memory limits that are different from task-level values, under Resource allocation limits, do the following:
-
For CPU, enter the number of CPU units that the Amazon ECS container agent reserves for the container.
-
For GPU, enter the number of GPU units for the container instance.
An Amazon EC2 instance with GPU support has 1 GPU unit for every GPU. For more information, see Amazon ECS task definitions for GPU workloads.
-
For Memory hard limit, enter the amount of memory, in GB, to present to the container. If the container attempts to exceed the hard limit, the container stops.
-
The Docker 20.10.0 or later daemon reserves a minimum of 6 mebibytes (MiB) of memory for a container, so don't specify fewer than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container, so don't specify fewer than 4 MiB of memory for your containers.
-
For Memory soft limit, enter the soft limit (in GB) of memory to reserve for the container.
When system memory is under contention, Docker attempts to keep the container memory to this soft limit. If you don't specify task-level memory, you must specify a non-zero integer for one or both of Memory hard limit and Memory soft limit. If you specify both, Memory hard limit must be greater than Memory soft limit.
This feature is not supported on Windows containers.
-
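In the container definition JSON, these fields map to cpu, memory (hard limit), and memoryReservation (soft limit). Note that the JSON takes CPU units and MiB, even though the console fields take vCPU and GB. A sketch with example values:

```python
# Container-level resource limits (values are examples).
container_limits = {
    "cpu": 512,                 # CPU units; 512 units = 0.5 vCPU
    "memory": 2048,             # hard limit in MiB; the container stops if exceeded
    "memoryReservation": 1024,  # soft limit in MiB; must be less than "memory"
}
```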
-
(Optional) Expand the Environment variables section to specify environment variables to inject into the container. You can specify environment variables either individually by using key-value pairs or in bulk by specifying an environment variable file that's hosted in an Amazon S3 bucket. For information about how to format an environment variable file, see Pass an individual environment variable to an Amazon ECS container.
When you specify an environment variable for secret storage, for Key, enter the secret name. Then for ValueFrom, enter the full ARN of the Systems Manager Parameter Store parameter or Secrets Manager secret.
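In the container definition JSON, plain variables go under environment and secret-backed variables under secrets. A sketch with a placeholder Secrets Manager ARN:

```python
# Plain environment variables plus one value injected from Secrets Manager.
environment = [{"name": "STAGE", "value": "production"}]
secrets = [
    {
        "name": "DB_PASSWORD",  # exposed to the container as an environment variable
        "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-pass",  # placeholder
    }
]
```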
-
(Optional) Select the Use log collection option to specify a log configuration. For each available log driver, there are log driver options to specify. The default option sends container logs to Amazon CloudWatch Logs. The other log driver options are configured by using AWS FireLens. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner.
The following describes each container log destination in more detail.
-
Amazon CloudWatch – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
-
Export logs to Splunk – Configure the task to send container logs to the Splunk driver that sends the logs to a remote service. You must enter the URL to your Splunk web service. The Splunk token is specified as a secret option because it can be treated as sensitive data.
-
Export logs to Amazon Data Firehose – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
-
Export logs to Amazon Kinesis Data Streams – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis Data Streams stream. To specify a different stream name, change the driver option values.
-
Export logs to Amazon OpenSearch Service – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided.
-
Export logs to Amazon S3 – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.
-
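For reference, the default CloudWatch Logs option corresponds to a logConfiguration block similar to the following sketch; the log group name and Region are placeholders.

```python
# awslogs driver configuration; group name and Region are placeholders.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs",
        "awslogs-create-group": "true",  # create the log group if it doesn't exist
    },
}
```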
-
(Optional) Configure additional container parameters.
The following describes each option.
Restart policy
These options define a restart policy to restart a container when it exits.
Expand Restart policy, and then configure the following items:
-
To enable a restart policy for the container, turn on Enable Restart policy.
-
For Ignored exit codes, specify a comma-separated list of integer container exit codes. If the container exits with any of the specified exit codes, Amazon ECS will not try to restart the container. If nothing is specified, Amazon ECS will not ignore any exit codes.
-
For Attempt reset period, specify an integer period of time, in seconds, that the container must run for before a restart can be attempted in the event of an exit. Amazon ECS can attempt to restart a container only once every Attempt reset period seconds. If nothing is specified, the container must run for 300 seconds before a restart can be attempted.
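These console fields correspond to the container-level restartPolicy parameter, as in the following sketch (values are examples):

```python
# Restart the container on failure, but not on clean exits (code 0),
# and require 180 seconds of runtime between restart attempts.
restart_policy = {
    "enabled": True,
    "ignoredExitCodes": [0],
    "restartAttemptPeriod": 180,
}
```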
HealthCheck
These are the commands that determine if a container is healthy. For more information, see Determine Amazon ECS task health using container health checks.
Expand HealthCheck, and then configure the following items: -
For Command, enter a comma-separated list of commands. You can start the commands with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell. If neither is specified, CMD is used.
-
For Interval, enter the number of seconds between each health check. The valid values are between 5 and 30.
-
For Timeout, enter the period of time (in seconds) to wait for a health check to succeed before it's considered a failure. The valid values are between 2 and 60.
-
For Start period, enter the period of time (in seconds) to wait for a container to bootstrap before the health check commands run. The valid values are between 0 and 300.
-
For Retries, enter the number of times to retry the health check commands when there is a failure. The valid values are between 1 and 10.
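These fields correspond to the container-level healthCheck parameter. A sketch, assuming the container image includes curl and serves HTTP on localhost:

```python
# Container health check; the endpoint is a placeholder.
health_check = {
    "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
    "interval": 30,     # seconds between checks
    "timeout": 5,       # seconds to wait before counting a check as failed
    "retries": 3,       # failures before the container is marked unhealthy
    "startPeriod": 10,  # grace period after container start
}
```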
Startup dependency ordering
This option defines dependencies for container startup and shutdown. A container can contain multiple dependencies.
Expand Startup dependency ordering, and then configure the following: -
Choose Add container dependency.
-
For Container, choose the container.
-
For Condition, choose the startup dependency condition.
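These fields correspond to the container-level dependsOn parameter. A sketch in which a container waits for a placeholder sidecar container to report healthy:

```python
# Start this container only after the "envoy" container passes its health check.
depends_on = [
    {"containerName": "envoy", "condition": "HEALTHY"}  # placeholder container name
]
```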
Container timeouts
These options determine when to start and stop a container.
Expand Container timeouts, and then configure the following: -
To configure the time to wait before giving up on resolving dependencies for a container, for Start timeout, enter the number of seconds.
-
To configure the time to wait before the container is stopped if it doesn't exit normally on its own, for Stop timeout, enter the number of seconds.
Container network settings
These options determine whether to use networking within a container.
Expand Container network settings, and then configure the following: -
To disable container networking, select Turn off networking.
-
To configure DNS server IP addresses that are presented to the container, in DNS servers, enter the IP address of each server on a separate line.
-
To configure DNS domains to search non-fully-qualified host names that are presented to the container, in DNS search domains, enter each domain on a separate line. The pattern is ^[a-zA-Z0-9-.]{0,253}[a-zA-Z0-9]$.
-
To configure the container host name, in Host name, enter the container host name.
-
To add hostnames and IP address mappings that are appended to the
/etc/hosts
file on the container, choose Add extra host, and then for Hostname and IP address, enter the host name and IP address.
Docker configuration
These options override the values in the Dockerfile.
Expand Docker configuration, and then configure the following items:
-
For Command, enter an executable command for a container.
This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND option to docker run. This parameter overrides the CMD instruction in a Dockerfile.
-
For Entry point, enter the Docker ENTRYPOINT that is passed to the container.
This parameter maps to Entrypoint in the Create a container section of the Docker Remote API and the --entrypoint option to docker run. This parameter overrides the ENTRYPOINT instruction in a Dockerfile.
-
For Working directory, enter the directory in which the container runs any entry point and command instructions that are provided.
This parameter maps to WorkingDir in the Create a container section of the Docker Remote API and the --workdir option to docker run. This parameter overrides the WORKDIR instruction in a Dockerfile.
Resource limits (Ulimits)
These values overwrite the default resource quota setting for the operating system. This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run.
Expand Resource limits (ulimits), and then choose Add ulimit. For Limit name, choose the limit. Then, for Soft limit and Hard limit, enter the values.
To add additional ulimits, choose Add ulimit.
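These fields correspond to the container-level ulimits parameter. A sketch raising the open-files limit (the values are examples):

```python
# Raise the open-files limit for the container.
ulimits = [
    {"name": "nofile", "softLimit": 65535, "hardLimit": 65535}
]
```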
Docker labels
This option adds metadata to your container. This parameter maps to Labels in the Create a container section of the Docker Remote API and the --label option to docker run.
Expand Docker labels, choose Add key value pair, and then enter the Key and Value.
To add additional Docker labels, choose Add key value pair.
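These fields correspond to the container-level dockerLabels parameter, a simple string-to-string map; the key and value below are placeholders.

```python
# Arbitrary metadata attached to the container.
docker_labels = {"com.example.team": "payments"}  # placeholder label
```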
-
-
(Optional) Choose Add more containers to add additional containers to the task definition.
-
-
(Optional) The Storage section is used to expand the amount of ephemeral storage for tasks hosted on Fargate. You can also use this section to add a data volume configuration for the task.
-
To expand the available ephemeral storage beyond the default value of 20 gibibytes (GiB) for your Fargate tasks, for Amount, enter a value up to 200 GiB.
-
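In the task definition JSON, this setting maps to the task-level ephemeralStorage parameter, as in the following sketch:

```python
# Expand Fargate ephemeral storage beyond the 20 GiB default.
ephemeral_storage = {"sizeInGiB": 100}  # valid values are 21 through 200
```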
-
(Optional) To add a data volume configuration for the task definition, choose Add volume, and then follow these steps.
-
For Volume name, enter a name for the data volume. The data volume name is used when creating a container mount point.
-
For Volume configuration, select whether you want to configure your volume when creating the task definition or during deployment.
Note
Volumes that can be configured when creating a task definition include Bind mount, Docker, Amazon EFS, and Amazon FSx for Windows File Server. Volumes that can be configured at deployment when running a task, or when creating or updating a service include Amazon EBS.
-
For Volume type, select a volume type compatible with the configuration type that you selected, and then configure the volume type.
The following describes the steps for each volume type.
Bind mount
-
Choose Add mount point, and then configure the following:
-
For Container, choose the container for the mount point.
-
For Source volume, choose the data volume to mount to the container.
-
For Container path, enter the path on the container to mount the volume.
-
For Read only, select whether the container has read-only access to the volume.
-
-
To add additional mount points, choose Add mount point.
Amazon EFS
-
For File system ID, choose the Amazon EFS file system ID.
-
(Optional) For Root directory, enter the directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used.
If you plan to use an EFS access point, leave this field blank.
-
(Optional) For Access point, choose the access point ID to use.
-
(Optional) To encrypt the data between the Amazon EFS file system and the Amazon ECS host or to use the task execution role when mounting the volume, choose Advanced configurations, and then configure the following:
-
To encrypt the data between the Amazon EFS file system and the Amazon ECS host, select Transit encryption, and then for Port, enter the port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS Mount Helper in the Amazon Elastic File System User Guide.
-
To use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system, select IAM authorization.
-
-
Choose Add mount point, and then configure the following:
-
For Container, choose the container for the mount point.
-
For Source volume, choose the data volume to mount to the container.
-
For Container path, enter the path on the container to mount the volume.
-
For Read only, select whether the container has read-only access to the volume.
-
-
To add additional mount points, choose Add mount point.
Docker
-
For Driver, enter the Docker volume configuration. Windows containers support only the use of the local driver. To use bind mounts, specify a host.
-
For Scope, choose the volume lifecycle.
-
To have the volume last only for the lifetime of the task, choose Task.
-
To have the volume persist after the task stops, choose Shared.
-
-
Choose Add mount point, and then configure the following:
-
For Container, choose the container for the mount point.
-
For Source volume, choose the data volume to mount to the container.
-
For Container path, enter the path on the container to mount the volume.
-
For Read only, select whether the container has read-only access to the volume.
-
-
To add additional mount points, choose Add mount point.
FSx for Windows File Server
-
For File system ID, choose the FSx for Windows File Server file system ID.
-
For Root directory, enter the directory within the FSx for Windows File Server file system to mount as the root directory inside the host.
-
For Credential parameter, choose how the credentials are stored.
-
To use AWS Secrets Manager, enter the Amazon Resource Name (ARN) of a Secrets Manager secret.
-
To use AWS Systems Manager, enter the Amazon Resource Name (ARN) of a Systems Manager parameter.
-
-
For Domain, enter the fully qualified domain name that's hosted by an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory or a self-hosted EC2 Active Directory.
-
Choose Add mount point, and then configure the following:
-
For Container, choose the container for the mount point.
-
For Source volume, choose the data volume to mount to the container.
-
For Container path, enter the path on the container to mount the volume.
-
For Read only, select whether the container has read-only access to the volume.
-
-
To add additional mount points, choose Add mount point.
Amazon EBS
-
Choose Add mount point, and then configure the following:
-
For Container, choose the container for the mount point.
-
For Source volume, choose the data volume to mount to the container.
-
For Container path, enter the path on the container to mount the volume.
-
For Read only, select whether the container has read-only access to the volume.
-
-
To add additional mount points, choose Add mount point.
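To tie these pieces together, the following sketch shows how a task-level volume (here, Amazon EFS with a placeholder file system ID) pairs with the container-level mount point that references it by name.

```python
# Task-level volume definition (Amazon EFS; file system ID is a placeholder).
volumes = [
    {
        "name": "app-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-12345678",   # placeholder
            "transitEncryption": "ENABLED",  # encrypt data in transit
        },
    }
]

# Container-level mount point that attaches the volume by name.
mount_points = [
    {"sourceVolume": "app-data", "containerPath": "/mnt/data", "readOnly": False}
]
```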
-
-
To add a volume from another container, choose Add volume from, and then configure the following:
-
For Container, choose the container.
-
For Source, choose the container that has the volume that you want to mount.
-
For Read only, select whether the container has read-only access to the volume.
-
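These fields correspond to the container-level volumesFrom parameter; the source container name below is a placeholder.

```python
# Mount all volumes from another container in the task, read-only.
volumes_from = [{"sourceContainer": "data-loader", "readOnly": True}]  # placeholder name
```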
-
(Optional) To configure your application trace and metric collection settings by using the AWS Distro for OpenTelemetry integration, expand Monitoring, and then select Use metric collection to collect and send metrics for your tasks to either Amazon CloudWatch or Amazon Managed Service for Prometheus. When this option is selected, Amazon ECS creates an AWS Distro for OpenTelemetry container sidecar that is preconfigured to send the application metrics. For more information, see Correlate Amazon ECS application performance using application metrics.
-
When Amazon CloudWatch is selected, your custom application metrics are routed to CloudWatch as custom metrics. For more information, see Exporting application metrics to Amazon CloudWatch.
Important
When exporting application metrics to Amazon CloudWatch, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon CloudWatch.
-
When you select Amazon Managed Service for Prometheus (Prometheus libraries instrumentation), your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For Workspace remote write endpoint, enter the remote write endpoint URL for your Prometheus workspace. For Scraping target, enter the host and port the AWS Distro for OpenTelemetry collector can use to scrape for metrics data. For more information, see Exporting application metrics to Amazon Managed Service for Prometheus.
Important
When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus.
-
When you select Amazon Managed Service for Prometheus (OpenTelemetry instrumentation), your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For Workspace remote write endpoint, enter the remote write endpoint URL for your Prometheus workspace. For more information, see Exporting application metrics to Amazon Managed Service for Prometheus.
Important
When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus.
-
-
(Optional) Expand the Tags section to add tags, as key-value pairs, to the task definition.
-
[Add a tag] Choose Add tag, and then do the following:
-
For Key, enter the key name.
-
For Value, enter the key value.
-
-
[Remove a tag] Next to the tag, choose Remove tag.
-
-
Choose Create to register the task definition.
- Amazon ECS console JSON editor
-
Open the console at https://console.aws.amazon.com/ecs/v2.
-
In the navigation pane, choose Task definitions.
-
On the Create new task definition menu, choose Create new task definition with JSON.
-
In the JSON editor box, edit your JSON file.
The JSON must pass the validation checks specified in JSON validation.
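As a starting point, the following sketch builds a minimal Fargate task definition that passes those checks and prints it as JSON to paste into the editor; the family, container name, and image are placeholders.

```python
import json

# Minimal Fargate task definition; family, container name, and image are placeholders.
task_definition = {
    "family": "sample-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",     # .25 vCPU
    "memory": "512",  # 512 MiB; a valid pairing with 256 CPU units
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
print(json.dumps(task_definition, indent=2))
```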
-
Choose Create.