Creating a task definition using the console - Amazon Elastic Container Service

Creating a task definition using the console

Create your task definitions using the Amazon ECS console. To make the task definition creation process as easy as possible, the console has default selections for many choices, which we describe below. There are also help panels available for most of the sections in the console, which provide further context.

You can create a task definition by stepping through the console, or by editing a JSON file.

JSON validation

The Amazon ECS console JSON editor validates the following in the JSON file:

  • The file is a valid JSON file

  • The file does not contain any extraneous keys

  • The file contains the familyName parameter

  • There is at least one entry under containerDefinitions
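
As a sketch, these checks can be reproduced locally before pasting a file into the editor. This is an illustrative approximation, not the console's actual implementation; in particular, the allowed-keys set below is a placeholder subset, not the full task definition schema.

```python
import json

# Placeholder subset of accepted top-level keys, for illustration only.
ALLOWED_KEYS = {
    "familyName", "family", "containerDefinitions", "cpu", "memory",
    "networkMode", "taskRoleArn", "executionRoleArn",
    "requiresCompatibilities", "volumes", "runtimePlatform", "tags",
}

def validate_task_definition(text):
    """Approximate the console editor's checks: valid JSON, no extraneous
    keys, a family name present, and at least one container definition."""
    try:
        doc = json.loads(text)              # 1. the file is valid JSON
    except ValueError as err:
        return [f"not valid JSON: {err}"]
    errors = []
    extraneous = set(doc) - ALLOWED_KEYS
    if extraneous:                          # 2. no extraneous keys
        errors.append(f"extraneous keys: {sorted(extraneous)}")
    if "familyName" not in doc and "family" not in doc:
        errors.append("missing family name")  # 3. family name present
    if not doc.get("containerDefinitions"):   # 4. at least one container
        errors.append("containerDefinitions must have at least one entry")
    return errors

good = '{"family": "web", "containerDefinitions": [{"name": "app", "image": "nginx"}]}'
bad = '{"family": "web", "containerDefinitions": [], "bogusKey": 1}'
print(validate_task_definition(good))  # → []
```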

AWS CloudFormation stacks

The following behavior applies to task definitions created in the new console before January 12, 2023.

When you create a task definition, the Amazon ECS console automatically creates a CloudFormation stack that has a name that begins with "ECS-Console-V2-TaskDefinition-". If you used the AWS CLI or SDK to deregister the task definition, then you must manually delete the task definition stack. For more information, see Deleting a Stack in the AWS CloudFormation User Guide.

Task definitions created after January 12, 2023 will not have a CloudFormation stack automatically created.

Amazon ECS console
  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. In the navigation pane, choose Task definitions.

  3. Choose Create new task definition, Create new task definition.

  4. For Task definition family, specify a unique name for the task definition.

  5. For Launch type, choose the application environment. The console default is AWS Fargate (serverless). Amazon ECS performs validation using this value to ensure the task definition parameters are valid for the infrastructure type.

  6. For Operating system/Architecture, choose the operating system and CPU architecture for the task.

    To run your task on a 64-bit ARM architecture, select Linux/ARM64. For more information, see Runtime platform.

    To run your AWS Fargate tasks on Windows containers, choose a supported Windows operating system. For more information, see Task Operating Systems.

  7. For Task size, choose the CPU and memory values to reserve for the task. The CPU value is specified as vCPUs and memory is specified as GB.

    For tasks hosted on Fargate, the following table shows the valid CPU and memory combinations.

    CPU value          Memory value                                   Operating systems supported for AWS Fargate
    256 (.25 vCPU)     512 MiB, 1 GB, 2 GB                            Linux
    512 (.5 vCPU)      1 GB, 2 GB, 3 GB, 4 GB                         Linux
    1024 (1 vCPU)      2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB       Linux, Windows
    2048 (2 vCPU)      Between 4 GB and 16 GB in 1 GB increments      Linux, Windows
    4096 (4 vCPU)      Between 8 GB and 30 GB in 1 GB increments      Linux, Windows
    8192 (8 vCPU)      Between 16 GB and 60 GB in 4 GB increments     Linux
    16384 (16 vCPU)    Between 32 GB and 120 GB in 8 GB increments    Linux

    Note

    The 8192 (8 vCPU) and 16384 (16 vCPU) options require Linux platform 1.4.0 or later.

    For tasks hosted on Amazon EC2, supported task CPU values are between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs).

    Note

    Task-level CPU and memory parameters are ignored for Windows containers.
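
The table above can be expressed as a lookup, which is handy for checking a size before registering a task definition. This is an illustrative sketch of the published combinations (memory in MiB, with 1 GB = 1024 MiB in these combinations), not an official validator.

```python
# Valid Fargate CPU/memory combinations from the table above, keyed by CPU
# units, with memory expressed in MiB (1 GB = 1024 MiB).
FARGATE_COMBINATIONS = {
    256:   [512, 1024, 2048],
    512:   [1024, 2048, 3072, 4096],
    1024:  [2048, 3072, 4096, 5120, 6144, 7168, 8192],
    2048:  list(range(4096, 16384 + 1, 1024)),    # 4 GB-16 GB, 1 GB steps
    4096:  list(range(8192, 30720 + 1, 1024)),    # 8 GB-30 GB, 1 GB steps
    8192:  list(range(16384, 61440 + 1, 4096)),   # 16 GB-60 GB, 4 GB steps
    16384: list(range(32768, 122880 + 1, 8192)),  # 32 GB-120 GB, 8 GB steps
}

def is_valid_fargate_size(cpu_units, memory_mib):
    """Return True if the CPU/memory pair is a supported Fargate combination."""
    return memory_mib in FARGATE_COMBINATIONS.get(cpu_units, [])

print(is_valid_fargate_size(256, 512))   # True
print(is_valid_fargate_size(256, 4096))  # False: .25 vCPU caps out at 2 GB
```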

  8. For Network mode, choose the network mode to use. The default is awsvpc mode. For more information, see Amazon ECS task networking.

    If you choose bridge, under Port mappings, for Host port, enter the port number on the container instance to reserve for your container.

  9. (Optional) Expand the Task roles section to configure the IAM roles:

    1. For Task role, choose the IAM role to assign to the task. A task IAM role provides permissions for the containers in a task to call AWS APIs.

    2. For Task execution role, choose the role.

      For information about when to use a task execution role, see Amazon ECS task execution IAM role. If you do not need the role, choose None.

  10. For each container to define in your task definition, complete the following steps.

    1. For Name, enter a name for the container.

    2. For Image URI, enter the image to use to start a container. Images in the Amazon ECR Public Gallery registry may be specified using the Amazon ECR Public registry name only. For example, if public.ecr.aws/ecs/amazon-ecs-agent:latest is specified, the Amazon Linux container hosted on Amazon ECR Public Gallery is used. For all other repositories, specify the repository using either the repository-url/image:tag or repository-url/image@digest formats.

    3. If your image is in a private registry outside of Amazon ECR, under Private registry, turn on Private registry authentication. Then, in Secrets Manager ARN or name, enter the Amazon Resource Name (ARN) of the secret.

    4. For Essential container, if your task definition has two or more containers defined, you can specify whether the container should be considered essential. If a container that is marked as Essential stops, then the task is stopped. Each task definition must contain at least one essential container.

    5. A port mapping allows the container to access ports on the host to send or receive traffic. Under Port mappings, do one of the following:

      • For Container port and Protocol, choose the port mapping to use for the container. This applies to both the awsvpc and bridge network modes.

      Choose Add more port mappings to specify additional container port mappings.

    6. To give the container read-only access to its root file system, for Read only root file system, select Read only.

    7. (Optional) To define the container-level CPU, GPU, and memory limits that are different from task-level values under Resource allocation limits, do the following:

      • For CPU, enter the number of CPU units the Amazon ECS container agent reserves for the container.

      • For GPU, enter the number of GPU units that the Amazon ECS container agent reserves for the container.

        An Amazon EC2 instance with GPU support has 1 GPU unit for every GPU. For more information, see Working with GPUs on Amazon ECS.

      • For Memory hard limit, enter the amount of memory, in GB, to present to the container. If the container attempts to exceed the hard limit, the container stops.

      • The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container, so you should not specify fewer than 6 MiB of memory for your containers.

        The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container, so you should not specify fewer than 4 MiB of memory for your containers.

      • For Memory soft limit, enter the soft limit (in GB) of memory to reserve for the container.

        When system memory is under contention, Docker attempts to keep the container memory to this soft limit. If you don't specify task-level memory, you must specify a non-zero integer for one or both of Memory hard limit and Memory soft limit. If you specify both, Memory hard limit must be greater than Memory soft limit.

        This is not supported on Windows containers.

    8. (Optional) Expand the Environment variables section to specify environment variables to inject into the container. You can specify environment variables either individually using key-value pairs or in bulk by specifying an environment variable file hosted in an Amazon S3 bucket. For information on how to format an environment variable file, see Passing environment variables to a container.

    9. (Optional) Select the Use log collection option to specify a log configuration. For each available log driver, there are log driver options to specify. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see Custom log routing.

      The following describes each container log destination in more detail.

      • Amazon CloudWatch — Configure the task to send container logs to CloudWatch Logs. Default log driver options are provided that create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.

      • Export logs to Splunk — Configure the task to send container logs to the splunk log driver, which sends the logs to a remote service. Enter the URL of your Splunk web service. The Splunk token is specified as a secret option because it can be treated as sensitive data.

      • Export logs to Amazon Kinesis Data Firehose — Configure the task to send container logs to Kinesis Data Firehose. Default log driver options are provided that send logs to a Kinesis Data Firehose delivery stream. To specify a different delivery stream name, change the driver option values.

      • Export logs to Amazon Kinesis Data Streams — Configure the task to send container logs to Kinesis Data Streams. Default log driver options are provided that send logs to a Kinesis data stream. To specify a different stream name, change the driver option values.

      • Export logs to Amazon OpenSearch Service — Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. For more information, see Forwarding logs to an Amazon OpenSearch Service domain.

      • Export logs to Amazon S3 — Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided but you must specify a valid Amazon S3 bucket name.

    10. (Optional) Configure additional container parameters.

      To configure an option, expand the corresponding section and follow the steps described for it.

      Healthcheck

      These are the commands that determine if a container is healthy.

      Expand HealthCheck, and then configure the following items:
      • For Command, enter a comma-separated list of commands. You can start the commands with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell. If neither is specified, CMD is used.

      • For Interval, enter the number of seconds between each health check. The valid values are between 5 and 30.

      • For Timeout, enter the period of time (in seconds) to wait for a health check to succeed before it's considered a failure. The valid values are between 2 and 60.

      • For Start period, enter the period of time (in seconds) to wait for a container to bootstrap before the health check commands run. The valid values are between 0 and 300.

      • For Retries, enter the number of times to retry the health check commands when there is a failure. The valid values are between 1 and 10.

      Container timeouts

      These options determine when to start and stop a container.

      Expand Container timeouts, and then configure the following:
      • To configure the time to wait before giving up on resolving dependencies for a container, for Start time, enter the number of seconds.

      • To configure the time to wait before the container is stopped if it doesn't exit normally on its own, for Stop time, enter the number of seconds.

      Container network settings

      These options determine whether to use networking within a container.

      Expand Container network settings, and then configure the following:
      • To disable container networking, select Turn off networking.

      • To configure DNS server IP addresses that are presented to the container, in DNS servers, enter the IP address of each server on a separate line.

      • To configure DNS domains to search non-fully-qualified hostnames that are presented to the container, in DNS search domains, enter each domain on a separate line.

        The pattern is ^[a-zA-Z0-9-.]{0,253}[a-zA-Z0-9]$.

      • To configure the container host name, in Host name, enter the host name to use for the container.

      • To add hostnames and IP address mappings that are appended to the /etc/hosts file on the container, choose Add extra host, and then for Hostname and IP address, enter the host name and IP address.

      Docker configuration

      These override the values in the Dockerfile.

      Expand Docker configuration, and then configure the following items:

      • For Command, enter an executable command for a container.

        This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND option to docker run. This will override the CMD instruction in a Dockerfile.

      • For Entry point, enter the Docker ENTRYPOINT that is passed to the container.

        This parameter maps to Entrypoint in the Create a container section of the Docker Remote API and the --entrypoint option to docker run. This will override the ENTRYPOINT instruction in a Dockerfile.

      • For Working directory, enter the working directory in which the container runs any entry point and command instructions provided.

        This parameter maps to WorkingDir in the Create a container section of the Docker Remote API and the --workdir option to docker run. This will override the WORKDIR instruction in a Dockerfile.

      Ulimits

      These values overwrite the default resource quota setting for the operating system.

      This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run.

      Expand Resource limits (ulimits), and then choose Add ulimit. For Limit name choose the limit. Then, for Soft limit and Hard limit, enter the values.

      To add additional ulimits, choose Add ulimit.

      Docker labels

      This option adds metadata to your container.

      This parameter maps to Labels in the Create a container section of the Docker Remote API and the --label option to docker run.

      Expand Docker labels, choose Add key value pair, and then enter the Key and Value.

      To add additional labels, choose Add key value pair.

      Container startup order

      This option defines dependencies for container startup and shutdown. A container can have multiple dependencies.

      Expand Startup dependency ordering, and then configure the following:
      1. Choose Add container dependency.

      2. For Container, choose the container.

      3. For Condition, choose the startup dependency condition.

      To add an additional dependency, choose Add container dependency.
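
The container options above correspond to fields in the task definition JSON. The following sketch shows one container definition with illustrative values; the container names, image, and limit values are placeholders, but the field names match the documented task definition parameters.

```python
# Illustrative container definition showing how the console options above map
# to task definition JSON fields. Names ("web", "init") and values are
# placeholders.
web_container = {
    "name": "web",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "essential": True,
    "memory": 512,             # Memory hard limit (MiB in the JSON)
    "memoryReservation": 256,  # Memory soft limit; must be below the hard limit
    "healthCheck": {           # Healthcheck options
        "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
        "interval": 30,
        "timeout": 5,
        "startPeriod": 10,
        "retries": 3,
    },
    "startTimeout": 120,       # Container timeouts
    "stopTimeout": 30,
    "ulimits": [               # Resource limits (ulimits)
        {"name": "nofile", "softLimit": 1024, "hardLimit": 4096},
    ],
    "dockerLabels": {"team": "web"},  # Docker labels
    "dependsOn": [             # Container startup order
        {"containerName": "init", "condition": "SUCCESS"},
    ],
}

# When both memory limits are set, the hard limit must exceed the soft limit.
assert web_container["memory"] > web_container["memoryReservation"]
```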
    11. (Optional) Choose Add more containers to add additional containers to the task definition. Choose Next after you define all your containers.

  11. (Optional) Use the Storage section to expand the amount of ephemeral storage for tasks hosted on Fargate, and to add a data volume configuration for the task.

    1. To expand the available ephemeral storage beyond the default value of 20 GiB for your Fargate tasks, for Amount, enter a value up to 200 GiB.

  12. (Optional) To add a data volume configuration for the task definition, choose Add volume, and then configure the volume type.

    For each volume type, follow the corresponding steps.

    Bind mount

    1. For Volume type, choose Bind mount.

    2. For Volume name, enter a name for the data volume. The data volume name is used when creating a container mount point.

    3. Choose Add mount point, and then configure the following:

      • For Container, choose the container for the mount point.

      • For Source volume, choose the data volume to mount to the container.

      • For Container path, enter the path on the container to mount the volume.

      • For Read only, select whether the container has read-only access to the volume.

    4. To add additional mount points, choose Add mount point.

    EFS
    1. For Volume type, choose EFS.

    2. For Volume name, enter a name for the data volume.

    3. For File system ID, choose the Amazon EFS file system ID.

    4. (Optional) For Root directory, enter the directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used.

      If you plan to use an EFS access point, leave this field blank.

    5. (Optional) For Access point, choose the access point ID to use.

    6. (Optional) To encrypt the data between the Amazon EFS file system and the Amazon ECS host or to use the task execution role when mounting the volume, choose Advanced configurations, and then configure the following:

      • To encrypt the data between the Amazon EFS file system and the Amazon ECS host, select Transit encryption, and then for Port, enter the port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS Mount Helper in the Amazon Elastic File System User Guide.

      • To use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system, select IAM authorization.

    7. Choose Add mount point, and then configure the following:

      • For Container, choose the container for the mount point.

      • For Source volume, choose the data volume to mount to the container.

      • For Container path, enter the path on the container to mount the volume.

      • For Read only, select whether the container has read-only access to the volume.

    8. To add additional mount points, choose Add mount point.

    Docker

    1. For Volume type, choose Docker volume.

    2. For Volume name, enter a name for the data volume. The data volume name is used when creating a container mount point.

    3. For Driver, enter the Docker volume configuration. Windows containers only support the use of the local driver. To use bind mounts, specify a host.

    4. For Scope, choose the volume lifecycle.

      • To have the volume created when the task starts and deleted when the task stops, choose Task.

      • To have the volume persist after the task stops, choose Shared.

    5. Choose Add mount point, and then configure the following:

      • For Container, choose the container for the mount point.

      • For Source volume, choose the data volume to mount to the container.

      • For Container path, enter the path on the container to mount the volume.

      • For Read only, select whether the container has read-only access to the volume.

    6. To add additional mount points, choose Add mount point.

    FSx for Windows File Server
    1. For Volume type, choose FSx for Windows File Server.

    2. For File system ID, choose the FSx for Windows File Server file system ID.

    3. For Root directory, enter the directory within the FSx for Windows File Server file system to mount as the root directory inside the host.

    4. For Credential parameter, choose how the credentials are stored.

      • To use Secrets Manager, enter the Amazon Resource Name (ARN) of a Secrets Manager secret.

      • To use Systems Manager, enter the Amazon Resource Name (ARN) of a Systems Manager parameter.

    5. For Domain, enter the fully qualified domain name that's hosted by an AWS Directory Service Managed Microsoft AD (Active Directory) or self-hosted EC2 AD.

    6. Choose Add mount point, and then configure the following:

      • For Container, choose the container for the mount point.

      • For Source volume, choose the data volume to mount to the container.

      • For Container path, enter the path on the container to mount the volume.

      • For Read only, select whether the container has read-only access to the volume.

    7. To add additional mount points, choose Add mount point.
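
As an illustration of how a configured volume and its mount point appear in the task definition JSON, here is a sketch for the EFS case; the file system ID and paths are placeholders.

```python
# Illustrative task-level volume definition for an EFS volume. The file
# system ID and paths are placeholders.
volume = {
    "name": "app-data",
    "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",     # placeholder file system ID
        "rootDirectory": "/",              # "/" (or omitted) mounts the root
        "transitEncryption": "ENABLED",    # encrypt data in transit
        "authorizationConfig": {
            "iam": "ENABLED",              # use the task IAM role to mount
        },
    },
}

# The mount point goes on the container definition and references the
# volume by name.
mount_point = {
    "sourceVolume": "app-data",
    "containerPath": "/usr/share/data",
    "readOnly": False,
}

assert mount_point["sourceVolume"] == volume["name"]
```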

  13. To add a volume from another container, choose Add volume from, and then configure the following:

    • For Container, choose the container.

    • For Source, choose the container which has the volume you want to mount.

    • For Read only, select whether the container has read-only access to the volume.

  14. (Optional) To configure your application trace and metric collection settings using the AWS Distro for OpenTelemetry integration, expand Monitoring, and then select Use metric collection to collect and send metrics for your tasks to either Amazon CloudWatch or Amazon Managed Service for Prometheus. When this option is selected, Amazon ECS creates an AWS Distro for OpenTelemetry sidecar container that is preconfigured to send the application metrics. For more information, see Collecting application metrics.

    1. When Amazon CloudWatch is selected, your custom application metrics are routed to CloudWatch as custom metrics. For more information, see Exporting application metrics to Amazon CloudWatch.

      Important

      When exporting application metrics to Amazon CloudWatch, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon CloudWatch.

    2. When you select Amazon Managed Service for Prometheus (Prometheus libraries instrumentation), your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For Workspace remote write endpoint, enter the remote write endpoint URL for your Prometheus workspace. For Scraping target, enter the host and port the AWS Distro for OpenTelemetry collector can use to scrape for metrics data. For more information, see Exporting application metrics to Amazon Managed Service for Prometheus.

      Important

      When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus.

    3. When you select Amazon Managed Service for Prometheus (OpenTelemetry instrumentation), your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For Workspace remote write endpoint, enter the remote write endpoint URL for your Prometheus workspace. For more information, see Exporting application metrics to Amazon Managed Service for Prometheus.

      Important

      When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus.

  15. (Optional) Expand the Tags section to add tags, as key-value pairs, to the task definition.

    • [Add a tag] Choose Add tag, and then do the following:

      • For Key, enter the key name.

      • For Value, enter the key value.

    • [Remove a tag] Next to the tag, choose Remove tag.

  16. Choose Create to register the task definition.

Amazon ECS console JSON editor
  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. In the navigation pane, choose Task definitions.

  3. Choose Create new task definition, Create new task definition with JSON.

  4. In the JSON editor box, edit your JSON file.

    The JSON must pass the validation checks specified in JSON validation.

  5. Choose Create.
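
For example, a minimal definition like the following passes the checks listed in JSON validation (valid JSON, no extraneous keys, a family name, and at least one entry under containerDefinitions). The family name, image, and port values are placeholders.

```python
import json

# A minimal, illustrative task definition suitable for the JSON editor.
# All values are placeholders.
task_definition = {
    "family": "sample-app",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",    # .25 vCPU, paired with 512 MiB per the Fargate table
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```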