Task definition parameters
Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family and container definitions are required in a task definition. In contrast, task role, network mode, volumes, task placement constraints, and launch type are optional.
You can use these parameters in a JSON file to configure your task definition. For more information, see Example task definitions.
The following are more detailed descriptions for each task definition parameter.
Family
family
-
Type: string
Required: yes
When you register a task definition, you give it a family, which is similar to a name for multiple versions of the task definition, specified with a revision number. The first task definition that's registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number.
Launch types
When you register a task definition, you can specify a launch type that Amazon ECS should validate the task definition against. If the task definition doesn't validate against the compatibilities specified, a client exception is returned. For more information, see Amazon ECS launch types.
The following parameter is allowed in a task definition.
Task execution role
executionRoleArn
-
Type: string
Required: conditional
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf.
Note The task execution IAM role is required depending on the requirements of your task. For more information, see Amazon ECS task execution IAM role.
Network mode
networkMode
-
Type: string
Required: no
The Docker networking mode to use for the containers in the task. For Amazon ECS tasks hosted on Fargate, the awsvpc network mode is required.
When the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration when you create a service or run a task with the task definition. For more information, see Fargate Task Networking in the Amazon Elastic Container Service User Guide for AWS Fargate.
The awsvpc network mode offers the highest networking performance for containers because they use the Amazon EC2 network stack. Exposed container ports are mapped directly to the attached elastic network interface port. Because of this, you can't use dynamic host port mappings.
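For reference, the network mode is set at the top level of the task definition JSON. A minimal fragment for a Fargate task might look like the following (the rest of the task definition is omitted).
"networkMode": "awsvpc"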
Runtime platform
The following parameters are required for Fargate launch types.
operatingSystemFamily
-
Type: string
Required: Conditional
Default: LINUX
This parameter is required for Amazon ECS tasks that are hosted on Fargate.
When you register a task definition, you specify the operating system family.
The valid values for Amazon ECS tasks that are hosted on Fargate are LINUX, WINDOWS_SERVER_2019_FULL, WINDOWS_SERVER_2019_CORE, WINDOWS_SERVER_2022_FULL, and WINDOWS_SERVER_2022_CORE.
The valid values for Amazon ECS tasks hosted on EC2 are LINUX, WINDOWS_SERVER_2022_CORE, WINDOWS_SERVER_2022_FULL, WINDOWS_SERVER_2019_FULL, WINDOWS_SERVER_2019_CORE, WINDOWS_SERVER_2016_FULL, WINDOWS_SERVER_2004_CORE, and WINDOWS_SERVER_20H2_CORE.
All task definitions that are used in a service must have the same value for this parameter.
When a task definition is part of a service, this value must match the service platformFamily value.
cpuArchitecture
-
Type: string
Required: Conditional
Default: X86_64
This parameter is required for Amazon ECS tasks hosted on Fargate.
When you register a task definition, you specify the CPU architecture. The valid values are X86_64 and ARM64.
All task definitions that are used in a service must have the same value for this parameter.
When you have Linux tasks for either the Fargate launch type or the EC2 launch type, you can set the value to ARM64. For more information, see Working with 64-bit ARM workloads on Amazon ECS.
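Both values are nested under the runtimePlatform object of the task definition JSON (runtimePlatform is the object name used by the RegisterTaskDefinition API; it isn't spelled out above). A sketch for a Linux task that runs on 64-bit ARM might look like the following.
"runtimePlatform": { "operatingSystemFamily": "LINUX", "cpuArchitecture": "ARM64" }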
Task size
When you register a task definition, you can specify the total CPU and memory used for the task. This is separate from the cpu and memory values at the container definition level. For tasks that are hosted on Amazon EC2 instances, these fields are optional. For tasks that are hosted on Fargate (both Linux and Windows), these fields are required and there are specific values for both cpu and memory that are supported.
Task-level CPU and memory parameters are ignored for Windows containers. We recommend specifying container-level resources for Windows containers.
The following parameters are allowed in a task definition:
cpu
-
Type: string
Required: conditional
Note This parameter is not supported for Windows containers.
The hard limit of CPU units to present for the task. It can be expressed as an integer using CPU units (for example, 1024) or as a string using vCPUs (for example, 1 vCPU or 1 vcpu) in a task definition. When the task definition is registered, a vCPU value is converted to an integer indicating the CPU units.
For tasks that run on Fargate (both Linux and Windows containers), this field is required and you must use one of the following values, which determines your range of supported values for the memory parameter:
CPU value | Memory value | Operating systems supported for AWS Fargate
256 (.25 vCPU) | 512 MiB, 1 GB, 2 GB | Linux
512 (.5 vCPU) | 1 GB, 2 GB, 3 GB, 4 GB | Linux
1024 (1 vCPU) | 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB | Linux, Windows
2048 (2 vCPU) | Between 4 GB and 16 GB in 1 GB increments | Linux, Windows
4096 (4 vCPU) | Between 8 GB and 30 GB in 1 GB increments | Linux, Windows
8192 (8 vCPU) (requires Linux platform 1.4.0 or later) | Between 16 GB and 60 GB in 4 GB increments | Linux
16384 (16 vCPU) (requires Linux platform 1.4.0 or later) | Between 32 GB and 120 GB in 8 GB increments | Linux
memory
-
Type: string
Required: conditional
Note This parameter is not supported for Windows containers.
The hard limit of memory (in MiB) to present to the task. It can be expressed as an integer using MiB (for example, 1024) or as a string using GB (for example, 1GB or 1 GB) in a task definition. When the task definition is registered, a GB value is converted to an integer indicating the MiB.
For tasks hosted on Fargate (both Linux and Windows containers), this field is required and you must use one of the following values, which determines your range of supported values for the cpu parameter:
Memory value (MiB) | CPU value | Operating systems supported for Fargate
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) | 256 (.25 vCPU) | Linux
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) | 512 (.5 vCPU) | Linux
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) | 1024 (1 vCPU) | Linux, Windows
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) | 2048 (2 vCPU) | Linux, Windows
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) | 4096 (4 vCPU) | Linux, Windows
Between 16 GB and 60 GB in 4 GB increments (requires Linux platform 1.4.0 or later) | 8192 (8 vCPU) | Linux
Between 32 GB and 120 GB in 8 GB increments (requires Linux platform 1.4.0 or later) | 16384 (16 vCPU) | Linux
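For reference, the task-level values are set as strings at the top level of the task definition. A sketch using one supported Fargate combination from the tables above (1 vCPU with 2 GB of memory) might look like the following.
"cpu": "1024", "memory": "2048"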
Container definitions
When you register a task definition, you must specify a list of container definitions that are passed to the Docker daemon on a container instance. The following parameters are allowed in a container definition.
Standard container definition parameters
The following task definition parameters are either required or used in most container definitions.
Name
name
-
Type: string
Required: yes
The name of a container. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. If you're linking multiple containers in a task definition, the name of one container can be entered in the links of another container. This is to connect the containers.
Image
image
-
Type: string
Required: yes
The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. You can also specify other repositories with either repository-url/image:tag or repository-url/image@digest. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the Create a container section of the Docker Remote API and the IMAGE parameter of docker run.
-
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image are not propagated to already running tasks.
-
Images in private registries are supported. For more information, see Private registry authentication for tasks.
-
Images in Amazon ECR repositories can be specified by using either the full registry/repository:tag or registry/repository@digest naming convention (for example, aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest or aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE).
-
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
-
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
-
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
-
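For example, a container definition that pulls a tagged image from a private Amazon ECR repository might use a value like the following (the account ID, Region, repository name, and tag are placeholders).
"image": "aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest"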
Memory
memory
-
Type: integer
Required: conditional
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. If using the Fargate launch type, this parameter is optional.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation
-
Type: integer
Required: no
The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when needed, up to either the hard limit that's specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the Create a container section of the Docker Remote API and the --memory-reservation option to docker run.
If a task-level memory value isn't specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance on which the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration allows the container to only reserve 128 MiB of memory from the remaining resources on the container instance. At the same time, it also allows the container to use more memory resources when needed.
Note This parameter is not supported for Windows containers.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
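To illustrate the soft and hard limits from the example above (128 MiB reserved, 300 MiB hard limit), the two values might be set together in a container definition as follows.
"memory": 300, "memoryReservation": 128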
Port mappings
portMappings
-
Type: object array
Required: no
Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the Create a container section of the Docker Remote API and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
Note After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the following locations:
-
Console: The Network Bindings section of a container description for a selected task.
-
AWS CLI: The networkBindings section of the describe-tasks command output.
-
API: The DescribeTasks response.
-
Metadata: The task metadata endpoint.
appProtocol
-
Type: string
Required: no
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy, and adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
For more information, see Service Connect.
Valid protocol values:
"HTTP" | "HTTP2" | "GRPC"
containerPort
-
Type: integer
Required: yes, when portMappings are used
The port number on the container that's bound to the user-specified or automatically assigned host port.
If using containers in a task with the Fargate launch type, exposed ports must be specified using containerPort.
For Windows containers on Fargate, you can't use port 3150 for the containerPort because it's reserved.
If using containers in a task with the EC2 launch type and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way don't count toward the 100 reserved ports limit of a container instance.
containerPortRange
-
Type: string
Required: no
The port number range on the container that's bound to the dynamically mapped host port range.
You can only set this parameter by using the register-task-definition API. The option is available in the portMappings parameter. For more information, see register-task-definition in the AWS Command Line Interface Reference.
The following rules apply when you specify a containerPortRange:
-
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and AWS Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
-
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
You can specify a maximum of 100 port ranges per container.
-
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
-
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
-
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind them to the container ports.
-
-
The containerPortRange valid values are between 1 and 65535.
-
A port can only be included in one port mapping per container.
-
You cannot specify overlapping port ranges.
-
The first port in the range must be less than the last port in the range.
-
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports.
For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange, which are the host ports that are bound to the container ports.
The port ranges are not included in the Amazon ECS task events that are sent to EventBridge. For more information, see Amazon ECS events and EventBridge.
-
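As a sketch, a port mapping that uses a port range instead of a single containerPort might look like the following (the range shown is illustrative).
"portMappings": [ { "containerPortRange": "8000-8010", "protocol": "tcp" } ]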
hostPortRange
-
Type: string
Required: no
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.
hostPort
-
Type: integer
Required: no
The port number on the container instance to reserve for your container.
If using containers in a task with the Fargate launch type, the hostPort can either be kept blank or be the same value as containerPort.
If using containers in a task with the EC2 launch type, you can specify a non-reserved host port for your container port mapping (this is referred to as static host port mapping), or you can omit the hostPort (or set it to 0) while specifying a containerPort, and your container automatically receives a port (this is referred to as dynamic host port mapping) in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153–65535 is used. Don't attempt to specify a host port in the ephemeral port range, because these ports are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously user-specified for a running task is also reserved while the task is running (after a task stops, the host port is released). The current reserved ports are displayed in the remainingResources of describe-container-instances output, and a container instance might have up to 100 reserved ports at a time, including the default reserved ports. Automatically assigned ports do not count toward the 100 reserved ports limit.
name
-
Type: string
Required: no, required for Service Connect to be configured in a service
The name that's used for the port mapping. This parameter only applies to Service Connect. This parameter is the name that you use in the Service Connect configuration of a service.
For more information, see Service Connect.
In the following example, both of the required fields for Service Connect are shown.
"portMappings": [ { "name": string, "containerPort": integer } ]
protocol
-
Type: string
Required: no
The protocol that's used for the port mapping. Valid values are tcp and udp. The default is tcp.
Important Only tcp is supported for Service Connect. Remember that tcp is implied if this field isn't set.
If you're specifying a host port, use the following syntax.
"portMappings": [ { "containerPort": integer, "hostPort": integer } ... ]
If you want an automatically assigned host port, use the following syntax.
"portMappings": [ { "containerPort": integer } ... ]
-
Advanced container definition parameters
The following advanced container definition parameters provide extended capabilities to the docker run command that's used to launch containers on your Amazon ECS container instances.
Health check
healthCheck
-
The container health check command and the associated configuration parameters for the container. This parameter maps to HealthCheck in the Create a container section of the Docker Remote API and the HEALTHCHECK parameter of docker run.
Note The Amazon ECS container agent only monitors and reports on the health checks that are specified in the task definition. Amazon ECS doesn't monitor Docker health checks that are embedded in a container image but aren't specified in the container definition. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image.
You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console.
The following describes the possible healthStatus values for a container:
-
HEALTHY—The container health check has passed successfully.
-
UNHEALTHY—The container health check has failed.
-
UNKNOWN—The container health check is being evaluated or there's no container health check defined.
The following describes the possible healthStatus values for a task. The container health check status of nonessential containers doesn't affect the health status of a task.
-
HEALTHY—All essential containers within the task have passed their health checks.
-
UNHEALTHY—One or more essential containers have failed their health check.
-
UNKNOWN—The essential containers within the task are still having their health checks evaluated, there are only nonessential containers with health checks defined, or there are no container health checks defined.
If a task is run manually and not as part of a service, it continues its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy, then the task is stopped and the service scheduler replaces it.
The following are notes about container health check support:
-
Container health checks are supported for Fargate tasks if you're using Linux platform version 1.1.0 or later. For more information, see AWS Fargate platform versions.
command
-
A string array representing the command that the container runs to determine if it's healthy. The string array can start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell. If neither is specified, CMD is used.
When registering a task definition in the AWS Management Console, use a comma-separated list of commands, which are converted to a string after the task definition is created. An example input for a health check is the following.
CMD-SHELL, curl -f http://localhost/ || exit 1
When registering a task definition using the AWS Management Console JSON panel, the AWS CLI, or the APIs, you should enclose the list of commands in brackets. An example input for a health check is the following.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
An exit code of 0, with no stderr output, indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the Create a container section of the Docker Remote API.
interval
-
The period of time (in seconds) between each health check. You may specify between 5 and 300 seconds. The default value is 30 seconds.
timeout
-
The period of time (in seconds) to wait for a health check to succeed before it's considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds.
retries
-
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is three retries.
startPeriod
-
The optional grace period to give containers time to bootstrap before failed health checks count toward the maximum number of retries. You can specify between 0 and 300 seconds. By default, startPeriod is disabled.
-
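Putting the parameters above together, a container health check that uses the curl command from the earlier example might look like the following sketch (the startPeriod value is illustrative).
"healthCheck": { "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ], "interval": 30, "timeout": 5, "retries": 3, "startPeriod": 10 }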
Environment
cpu
-
Type: integer
Required: conditional
The number of cpu units the Amazon ECS container agent reserves for the container. On Linux, this parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run.
This field is optional for tasks that use the Fargate launch type. The total amount of CPU reserved for all the containers that are within a task must be lower than the task-level cpu value.
Note You can determine the number of CPU units that are available to each Amazon EC2 instance type by multiplying the number of vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, assume that you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that task is the only task running on the container instance. In this example, the container can use the full 1,024 CPU unit share at any given time. However, assume then that you launched another copy of the same task on that container instance. Each task is guaranteed a minimum of 512 CPU units when needed, and each container can float to higher CPU usage if the other container isn't using it. However, if both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below two in your container definitions. For CPU values below two (including null), the behavior varies based on your Amazon ECS container agent version:
-
Agent versions <= 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of one are passed to Docker as one, which the Linux kernel converts to two CPU shares.
-
Agent versions >= 1.2.0: Null, zero, and CPU values of one are passed to Docker as two CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute quota. Windows containers only have access to the specified amount of CPU that's defined in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
For additional examples, see How Amazon ECS manages CPU and memory resources.
-
essential
-
Type: Boolean
Required: no
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application architecture.
"essential": true|false
entryPoint
-
Important Early versions of the Amazon ECS container agent don't properly handle entryPoint parameters. If you have problems using entryPoint, update your container agent or enter your commands and arguments as command array items instead.
Type: string array
Required: no
The entry point that's passed to the container. This parameter maps to Entrypoint in the Create a container section of the Docker Remote API and the --entrypoint option to docker run. For more information about the Docker ENTRYPOINT parameter, see https://docs.docker.com/engine/reference/builder/#entrypoint.
"entryPoint": ["string", ...]
command
-
Type: string array
Required: no
The command that's passed to the container. This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information about the Docker CMD parameter, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, make sure that each argument is a separate string in the array.
"command": ["string", ...]
workingDirectory
-
Type: string
Required: no
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the Create a container section of the Docker Remote API and the --workdir option to docker run.
"workingDirectory": "string"
environment
-
Type: object array
Required: no
The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.
Important We do not recommend using plaintext environment variables for sensitive information, such as credential data.
"environment" : [ { "name" : "string", "value" : "string" }, { "name" : "string", "value" : "string" } ]
secrets
-
Type: Object array
Required: No
An object representing the secret to expose to your container. For more information, see Passing sensitive data to a container.
name
-
Type: String
Required: Yes
The value to set as the environment variable on the container.
valueFrom
-
Type: String
Required: Yes
The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the AWS Secrets Manager secret or the full ARN of the parameter in the AWS Systems Manager Parameter Store.
Note If the Systems Manager Parameter Store parameter exists in the same AWS Region as the task that you're launching, you can use either the full ARN or name of the secret. If the parameter exists in a different Region, then the full ARN must be specified.
"secrets": [ { "name": "environment_variable_name", "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name" } ]
Network settings
dnsServers
-
Type: string array
Required: no
A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a container section of the Docker Remote API and the --dns option to docker run.
Note This parameter isn't supported for Windows containers or tasks using the awsvpc network mode.
"dnsServers": ["string", ...]
Storage and logging
readonlyRootFilesystem
-
Type: Boolean
Required: no
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API and the --read-only option to docker run.
option to docker run. Note This parameter is not supported for Windows containers.
"readonlyRootFilesystem": true|false
mountPoints
-
Type: Object Array
Required: No
The mount points for data volumes in your container.
This parameter maps to Volumes in the Create a container section of the Docker Remote API and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
sourceVolume
-
Type: String
Required: Yes, when mountPoints are used
The name of the volume to mount.
containerPath
-
Type: String
Required: Yes, when mountPoints are used
The path on the container to mount the volume at.
readOnly
-
Type: Boolean
Required: No
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
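No syntax snippet is shown for mountPoints above, so the following sketch illustrates how the three fields fit together (the volume name and container path are placeholders).
"mountPoints": [ { "sourceVolume": "my-volume", "containerPath": "/mount/path", "readOnly": false } ]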
volumesFrom
-
Type: Object Array
Required: No
Data volumes to mount from another container. This parameter maps to VolumesFrom in the Create a container section of the Docker Remote API and the --volumes-from option to docker run.
sourceContainer
-
Type: string
Required: yes, when volumesFrom is used
The name of the container to mount volumes from.
readOnly
-
Type: Boolean
Required: no
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
"volumesFrom": [ { "sourceContainer": "string", "readOnly": true|false } ]
logConfiguration
-
Type: LogConfiguration Object
Required: no
The log configuration specification for the container.
For example task definitions that use a log configuration, see Example task definitions.
This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Consider the following when specifying a log configuration for your containers:
-
Amazon ECS supports a subset of the logging drivers that are available to the Docker daemon (shown in the valid values that follow). Additional log drivers might be available in future releases of the Amazon ECS container agent.
-
This parameter requires version 1.18 or later of the Docker Remote API on your container instance.
-
For tasks that use the Fargate launch type, because you don't have access to the underlying infrastructure that your tasks are hosted on, any additional software needed must be installed outside of the task (for example, Fluentd output aggregators or a remote host running Logstash to send Gelf logs to).
"logConfiguration": { "logDriver": "awslogs","fluentd","gelf","json-file","journald","logentries","splunk","syslog","awsfirelens", "options": {"string": "string" ...}, "secretOptions": [{ "name": "string", "valueFrom": "string" }] }
logDriver
-
Type: string
Valid values:
"awslogs","fluentd","gelf","json-file","journald","logentries","splunk","syslog","awsfirelens"
Required: yes, when logConfiguration is used
The log driver to use for the container. By default, the valid values that are listed earlier are log drivers that the Amazon ECS container agent can communicate with.
For tasks that use the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For more information about how to use the awslogs log driver in task definitions to send your container logs to CloudWatch Logs, see Using the awslogs log driver.
For more information about using the awsfirelens log driver, see Custom Log Routing.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
options
-
Type: string to string map
Required: no
The configuration options to send to the log driver.
When you use FireLens to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. It can help to resolve potential log loss issues, because high throughput might cause the buffer inside Docker to run out of memory. For more information, see Fluentd buffer limit.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance.
secretOptions
-
Type: object array
Required: no
An object that represents the secret to pass to the log configuration. Secrets used in log configuration can include an authentication token, certificate, or encryption key, for example. For more information, see Passing sensitive data to a container.
name
-
Type: String
Required: Yes
The value to set as the environment variable on the container.
valueFrom
-
Type: String
Required: Yes
The secret to expose to the log configuration of the container.
"logConfiguration": { "logDriver": "splunk", "options": { "splunk-url": "https://cloud.splunk.com:8080", "splunk-token": "...", "tag": "...", ... }, "secretOptions": [{ "name": "splunk-token", "valueFrom": "/ecs/logconfig/splunkcred" }] }
-
firelensConfiguration
-
Type: FirelensConfiguration Object
Required: No
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom log routing.
{ "firelensConfiguration": { "type": "fluentd", "options": { "KeyName": "" } } }
options
-
Type: String to string map
Required: No
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details, to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration.
type
-
Type: String
Required: Yes
The log router to use. The valid values are fluentd or fluentbit.
Security
For more information about container security, see Task and container security in the Amazon ECS Best Practices Guide.
user
-
Type: string
Required: no
The user to use inside the container. This parameter maps to User in the Create a container section of the Docker Remote API and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
-
user
-
user:group
-
uid
-
uid:gid
-
user:gid
-
uid:group
Note This parameter is not supported for Windows containers.
"user": "string"
-
Resource limits
ulimits
-
Type: object array
Required: no
A list of ulimit values to define for a container. This value overwrites the default resource quota setting for the operating system. This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the nofile resource limit parameter, which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 1024 and the hard limit is 4096. You can set the values of both limits up to 1048576.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
Note This parameter is not supported for Windows containers.
"ulimits": [ { "name": "core"|"cpu"|"data"|"fsize"|"locks"|"memlock"|"msgqueue"|"nice"|"nofile"|"nproc"|"rss"|"rtprio"|"rttime"|"sigpending"|"stack", "softLimit": integer, "hardLimit": integer } ... ]
name
-
Type: string
Valid values:
"core" | "cpu" | "data" | "fsize" | "locks" | "memlock" | "msgqueue" | "nice" | "nofile" | "nproc" | "rss" | "rtprio" | "rttime" | "sigpending" | "stack"
Required: yes, when ulimits are used
The type of the ulimit.
hardLimit
-
Type: integer
Required: yes, when ulimits are used
The hard limit for the ulimit type.
softLimit
-
Type: integer
Required: yes, when ulimits are used
The soft limit for the ulimit type.
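For example, to state the default Fargate nofile limits described above explicitly, the ulimit might be set as follows.
"ulimits": [ { "name": "nofile", "softLimit": 1024, "hardLimit": 4096 } ]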
Docker labels
dockerLabels
-
Type: string to string map
Required: no
A key/value map of labels to add to the container. This parameter maps to Labels in the Create a container section of the Docker Remote API and the --label option to docker run.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
"dockerLabels": {"string": "string" ...}
Other container definition parameters
The following container definition parameters can be used when registering task definitions in the Amazon ECS console by using the Configure via JSON option. For more information, see Creating a task definition using the console.
Linux parameters
linuxParameters
-
Type: LinuxParameters object
Required: no
Linux-specific options that are applied to the container, such as KernelCapabilities.
Note This parameter isn't supported for Windows containers.
"linuxParameters": { "capabilities": { "add": ["string", ...], "drop": ["string", ...] } }
capabilities
-
Type: KernelCapabilities object
Required: no
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For more information about the default capabilities and the other available capabilities, see Runtime privilege and Linux capabilities in the Docker run reference. For more information about these Linux capabilities, see the capabilities(7) Linux manual page.
add
-
Type: string array
Valid values:
"SYS_PTRACE"
Required: no
The Linux capabilities for the container to add to the default configuration that's provided by Docker. This parameter maps to CapAdd in the Create a container section of the Docker Remote API and the --cap-add option to docker run.
drop
-
Type: string array
Valid values:
"ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
Required: no
The Linux capabilities for the container to remove from the default configuration that's provided by Docker. This parameter maps to CapDrop in the Create a container section of the Docker Remote API and the --cap-drop option to docker run.
initProcessEnabled
-
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run.
This parameter requires version 1.25 of the Docker Remote API or greater on your container instance.
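No syntax snippet is shown for this parameter above; as a sketch, it nests directly under linuxParameters and can sit alongside capabilities in the same object.
"linuxParameters": { "initProcessEnabled": true }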
Container dependency
dependsOn
-
Type: Array of ContainerDependency objects
Required: no
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown. For an example, see Example: Container dependency.
Note If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
For Amazon ECS tasks that are hosted on Fargate, this parameter requires that the task or service uses platform version 1.3.0 or later (Linux) or 1.0.0 (Windows).
"dependsOn": [ { "containerName": "string", "condition": "string" } ]
containerName
-
Type: String
Required: Yes
The container name that must meet the specified condition.
condition
-
Type: String
Required: Yes
The dependency condition of the container. The following are the available conditions and their behavior:
-
START – This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
-
COMPLETE – This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for non-essential containers that run a script and then exit. This condition can't be set on an essential container.
-
SUCCESS – This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
-
HEALTHY – This condition validates that the dependent container passes its container health check before permitting other containers to start. This requires that the dependent container has health checks configured in the task definition. This condition is confirmed only at task startup.
-
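For example, to hold an application container until a sidecar passes its health check, a dependency using the HEALTHY condition might look like the following (the container name is illustrative).
"dependsOn": [ { "containerName": "envoy-proxy", "condition": "HEALTHY" } ]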
Container timeouts
startTimeout
-
Type: Integer
Required: no
Example values:
120
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA doesn't start.
Note If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
For Amazon ECS tasks that are hosted on Fargate, this parameter requires that the task or service uses platform version 1.3.0 or later (Linux).
stopTimeout
-
Type: Integer
Required: no
Example values:
120
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks that use the Fargate launch type, the task or service requires platform version 1.3.0 or later (Linux) or 1.0.0 or later (for Windows). The max stop timeout value is 120 seconds. However, if the parameter isn't specified, the default value of 30 seconds is used.
System controls
systemControls
-
Type: SystemControl object
Required: no
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the Create a container section of the Docker Remote API and the --sysctl option to docker run.
We do not recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode, for the following reasons:
-
For tasks that use the awsvpc network mode, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
-
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
-
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
-
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
Note This parameter is not supported for Windows containers or tasks using the Fargate launch type.
"systemControls": [ { "namespace": "string", "value": "string" } ]
namespace
-
Type: String
Required: no
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", as well as Sysctls beginning with "fs.mqueue.*"
Valid network namespace values: Sysctls beginning with "net.*"
value
-
Type: String
Required: no
The value for the namespaced kernel parameter that's specified in namespace.
-
Interactive
interactive
-
Type: Boolean
Required: no
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the Create a container section of the Docker Remote API and the --interactive option to docker run.
Pseudo terminal
pseudoTerminal
-
Type: Boolean
Required: no
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the Create a container section of the Docker Remote API and the --tty option to docker run.
Proxy configuration
proxyConfiguration
-
Type: ProxyConfiguration object
Required: no
The configuration details for the App Mesh proxy.
For tasks that use the Fargate launch type, this feature requires that the task or service uses platform version 1.3.0 or later.
Note This parameter is not supported for Windows containers.
"proxyConfiguration": { "type": "APPMESH", "containerName": "string", "properties": [ { "name": "string", "value": "string" } ] }
type
-
Type: String
Valid values:
APPMESH
Required: No
The proxy type. The only supported value is APPMESH.
containerName
-
Type: String
Required: Yes
The name of the container that serves as the App Mesh proxy.
properties
-
Type: Array of KeyValuePair objects
Required: No
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
-
IgnoredUID – (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
-
IgnoredGID – (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
-
AppPorts – (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
-
ProxyIngressPort – (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
-
ProxyEgressPort – (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
-
EgressIgnoredPorts – (Required) The outbound traffic going to these specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
-
EgressIgnoredIPs – (Required) The outbound traffic going to these specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
-
Volumes
When you register a task definition, you can optionally specify a list of volumes to be passed to the Docker daemon on a container instance, which then becomes available for access by other containers on the same container instance.
For more information, see Using data volumes in tasks.
The following parameters are allowed in a volume definition.
name
-
Type: String
Required: No
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. This name is referenced in the sourceVolume parameter of the container definition mountPoints object.
efsVolumeConfiguration
-
Type: Object
Required: No
This parameter is specified when using Amazon EFS volumes.
fileSystemId
-
Type: String
Required: Yes
The Amazon EFS file system ID to use.
rootDirectory
-
Type: String
Required: No
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
Important If an EFS access point is specified in the authorizationConfig, the root directory parameter must either be omitted or set to /, which will enforce the path set on the EFS access point.
transitEncryption
-
Type: String
Valid values:
ENABLED | DISABLED
Required: No
Whether or not to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting Data in Transit in the Amazon Elastic File System User Guide.
transitEncryptionPort
-
Type: Integer
Required: No
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS Mount Helper in the Amazon Elastic File System User Guide.
authorizationConfig
-
Type: Object
Required: No
The authorization configuration details for the Amazon EFS file system.
accessPointId
-
Type: String
Required: No
The access point ID to use. If an access point is specified, the root directory value in the efsVolumeConfiguration must either be omitted or set to /, which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS Access Points in the Amazon Elastic File System User Guide.
iam
-
Type: String
Valid values:
ENABLED | DISABLED
Required: No
Whether or not to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see IAM Roles for Tasks.
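Putting the Amazon EFS parameters together, a task-level volume entry might look like the following sketch (the volume name, file system ID, and access point ID are placeholders).
"volumes": [ { "name": "my-efs-volume", "efsVolumeConfiguration": { "fileSystemId": "fs-1234567890abcdef0", "transitEncryption": "ENABLED", "authorizationConfig": { "accessPointId": "fsap-1234567890abcdef0", "iam": "ENABLED" } } } ]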
Tags
When you register a task definition, you can optionally specify metadata tags that are applied to the task definition. Tags help you categorize and organize your task definition. Each tag consists of a key and an optional value. You define both of them. For more information, see Tagging your Amazon ECS resources.
Don't add personally identifiable information or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including billing. Tags aren't intended to be used for private or sensitive data.
The following parameters are allowed in a tag object.
key
-
Type: string
Required: no
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value
-
Type: string
Required: no
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
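For example, a single tag object with an illustrative key and value might look like the following.
"tags": [ { "key": "team", "value": "web" } ]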
Other task definition parameters
The following task definition parameters can be used when registering task definitions in the Amazon ECS console by using the Configure via JSON option. For more information, see Creating a task definition using the console.
Ephemeral storage
ephemeralStorage
-
Type: Object
Required: No
The amount of ephemeral storage (in GB) to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks that are hosted on AWS Fargate. For more information, see Bind mounts.
Note This parameter is only supported for tasks that are hosted on AWS Fargate using platform version 1.4.0 or later (Linux). This isn't supported for Windows containers on Fargate.
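A sketch of the object's shape, assuming the sizeInGiB field name that the ECS API uses for this setting (the value shown is illustrative).
"ephemeralStorage": { "sizeInGiB": 100 }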
IPC mode
ipcMode
-
Type: String
Required: No
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all the containers that are within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all the containers that are within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the value for ipcMode is set to shareable. For more information, see IPC settings in the Docker run reference.
If the host IPC mode is used, there's a heightened risk of undesired IPC namespace exposure. For more information, see Docker security.
If you're setting namespaced kernel parameters that use systemControls for the containers in the task, the following applies to your IPC resource namespace. For more information, see System controls.
-
For tasks that use the host IPC mode, systemControls that relate to the IPC namespace aren't supported.
-
For tasks that use the task IPC mode, systemControls that relate to the IPC namespace apply to all containers within a task.
-
This parameter is not supported for Windows containers or tasks using the Fargate launch type.
PID mode
pidMode
-
Type: String
Required: No
The process namespace to use for the containers in the task. The valid values are host or task. If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same process namespace. If no value is specified, the default is a private namespace. For more information, see PID settings in the Docker run reference.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure. For more information, see Docker security.
This parameter is not supported for Windows containers or tasks using the Fargate launch type.