[ aws . ecs ]

register-daemon-task-definition

Description

Registers a new daemon task definition from the supplied family and containerDefinitions . Optionally, you can add data volumes to your containers with the volumes parameter. For more information, see Daemon task definitions in the Amazon Elastic Container Service Developer Guide .

A daemon task definition is a template that describes the containers that form a daemon. Daemons deploy cross-cutting software agents such as security monitoring, telemetry, and logging across your Amazon ECS infrastructure.

Each time you call RegisterDaemonTaskDefinition , a new revision of the daemon task definition is created. You can’t modify a revision after you register it.

See also: AWS API Documentation
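As a sketch, an invocation might look like the following (the family name and file path are illustrative):

```shell
aws ecs register-daemon-task-definition \
    --family monitoring-agent \
    --container-definitions file://container-definitions.json
```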

Synopsis

  register-daemon-task-definition
--family <value>
[--task-role-arn <value>]
[--execution-role-arn <value>]
--container-definitions <value>
[--cpu <value>]
[--memory <value>]
[--volumes <value>]
[--tags <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
[--cli-binary-format <value>]
[--no-cli-pager]
[--cli-auto-prompt]
[--no-cli-auto-prompt]
[--cli-error-format <value>]

Options

--family (string) [required]

You must specify a family for a daemon task definition. This family is used as a name for your daemon task definition. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.

--task-role-arn (string)

The short name or full Amazon Resource Name (ARN) of the IAM role that containers in this daemon task can assume. All containers in this daemon task are granted the permissions that are specified in this role.

--execution-role-arn (string)

The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. The task execution role is required for daemon tasks that pull container images from Amazon ECR or send container logs to CloudWatch.

--container-definitions (list) [required]

A list of container definitions in JSON format that describe the containers that make up your daemon task.

(structure)

A container definition for a daemon task. Daemon container definitions describe the containers that run as part of a daemon task on container instances managed by capacity providers.

name -> (string)

The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.

image -> (string) [required]

The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest .

memory -> (integer)

The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.

memoryReservation -> (integer)

The soft limit (in MiB) of memory to reserve for the container.

repositoryCredentials -> (structure)

The private repository authentication credentials to use.

credentialsParameter -> (string) [required]

The Amazon Resource Name (ARN) of the secret containing the private repository credentials.

Note

When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you’re launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret.

healthCheck -> (structure)

The container health check command and associated configuration parameters for the container.

command -> (list) [required]

A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.

When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.

[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]

You don’t include the double quotes and brackets when you use the Amazon Web Services Management Console.

CMD-SHELL, curl -f http://localhost/ || exit 1

An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.

(string)

interval -> (integer)

The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command .

timeout -> (integer)

The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds. This value applies only when you specify a command .

retries -> (integer)

The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command .

startPeriod -> (integer)

The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command .

Note

If a health check succeeds within the startPeriod , then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
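Putting these parameters together, a healthCheck might be specified as follows (the curl command and values are illustrative):

```json
"healthCheck": {
    "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 60
}
```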

cpu -> (integer)

The number of cpu units reserved for the container.

essential -> (boolean)

If the essential parameter of a container is marked as true , and that container fails or stops for any reason, all other containers that are part of the task are stopped.

entryPoint -> (list)

The entry point that’s passed to the container.

(string)

command -> (list)

The command that’s passed to the container.

(string)

workingDirectory -> (string)

The working directory to run commands inside the container in.

environmentFiles -> (list)

A list of files containing the environment variables to pass to a container.

(structure)

A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.

If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they’re processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide .

Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.

You must use the following platforms for the Fargate launch type:

  • Linux platform version 1.4.0 or later.
  • Windows platform version 1.0.0 or later.

Consider the following when using the Fargate launch type:

  • The file is handled like a native Docker env-file.
  • There is no support for shell escape handling.
  • The container entry point interprets the VARIABLE values.

value -> (string) [required]

The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.

type -> (string) [required]

The file type to use. Environment files are objects in Amazon S3. The only supported value is s3 .

Possible values:

  • s3
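For example, an environmentFiles entry might look like this (the bucket and object names are illustrative):

```json
"environmentFiles": [
    {
        "value": "arn:aws:s3:::my-bucket/app-settings.env",
        "type": "s3"
    }
]
```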

environment -> (list)

The environment variables to pass to a container.

(structure)

A key-value pair object.

name -> (string)

The name of the key-value pair. For environment variables, this is the name of the environment variable.

value -> (string)

The value of the key-value pair. For environment variables, this is the value of the environment variable.

secrets -> (list)

The secrets to pass to the container.

(structure)

An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:

  • To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
  • To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.

For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .

name -> (string) [required]

The name of the secret.

valueFrom -> (string) [required]

The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.

For information about the required Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide .

Note

If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
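For example, a secrets entry that references a Secrets Manager secret might look like this (the secret name and ARN are illustrative):

```json
"secrets": [
    {
        "name": "DB_PASSWORD",
        "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-password-AbCdEf"
    }
]
```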

readonlyRootFilesystem -> (boolean)

When this parameter is true, the container is given read-only access to its root file system.

mountPoints -> (list)

The mount points for data volumes in your container.

(structure)

The details for a volume mount point that’s used in a container definition.

sourceVolume -> (string)

The name of the volume to mount. Must be a volume name referenced in the name parameter of the task definition volume .

containerPath -> (string)

The path on the container to mount the host volume at.

readOnly -> (boolean)

If this value is true , the container has read-only access to the volume. If this value is false , then the container can write to the volume. The default value is false .

logConfiguration -> (structure)

The log configuration specification for the container.

logDriver -> (string) [required]

The log driver to use for the container.

For tasks on Fargate, the supported log drivers are awslogs , splunk , and awsfirelens .

For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs , fluentd , gelf , json-file , journald , syslog , splunk , and awsfirelens .

For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide .

For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner .

Note

If you have a custom driver that isn’t listed, you can fork the Amazon ECS container agent project that’s available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don’t currently provide support for running modified copies of this software.

Possible values:

  • json-file
  • syslog
  • journald
  • gelf
  • fluentd
  • awslogs
  • splunk
  • awsfirelens

options -> (map)

The configuration options to send to the log driver.

The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:

awslogs-create-group

Required: No

Specify whether you want the log group to be created automatically. If this option isn’t specified, it defaults to false .

Note

Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group .

awslogs-region

Required: Yes

Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they’re all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.

awslogs-group

Required: Yes

Make sure to specify a log group that the awslogs log driver sends its log streams to.

awslogs-stream-prefix

Required: Yes, when using Fargate. Optional when using EC2.

Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id .

If you don’t specify a prefix with this option, then the log stream is named after the container ID that’s assigned by the Docker daemon on the container instance. Because it’s difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.

For Amazon ECS services, you can use the service name as the prefix. This lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.

You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.
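Putting the options above together, an awslogs log configuration might look like the following (the log group name, Region, and prefix are illustrative):

```json
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-create-group": "true",
        "awslogs-group": "/ecs/daemon-agent",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "agent"
    }
}
```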

awslogs-datetime-format

Required: No

This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.

For more information, see awslogs-datetime-format .

You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.

Note

Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

awslogs-multiline-pattern

Required: No

This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.

For more information, see awslogs-multiline-pattern .

This option is ignored if awslogs-datetime-format is also configured.

You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.

Note

Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

The following options apply to all supported log drivers.

mode

Required: No

Valid values: non-blocking | blocking

This option defines the delivery mode of log messages from the container to the log driver specified using logDriver . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.

If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure.

If you use the non-blocking mode, the container’s logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver (http://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .

You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don’t specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide .

Note

On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:

  • Set the mode option in your container definition’s logConfiguration as blocking .
  • Set the defaultLogDriverMode account setting to blocking .

max-buffer-size

Required: No

Default value: 10m

When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that’s used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
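For example, to opt into non-blocking delivery with a larger buffer, the options map might include the following (the buffer size is illustrative):

```json
"options": {
    "mode": "non-blocking",
    "max-buffer-size": "25m"
}
```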

To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url .

When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help prevent log loss, because high throughput might otherwise exhaust the memory available for the buffer inside Docker.

Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream .

When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream .

When you export logs to Amazon OpenSearch Service, you can specify options like Name , Host (OpenSearch Service endpoint without protocol), Port , Index , Type , Aws_auth , Aws_region , Suppress_Type_Name , and tls . For more information, see Under the hood: FireLens for Amazon ECS Tasks .

When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region , total_file_size , upload_timeout , and use_put_object as options.

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'

key -> (string)

value -> (string)

secretOptions -> (list)

The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .

(structure)

An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:

  • To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
  • To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.

For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .

name -> (string) [required]

The name of the secret.

valueFrom -> (string) [required]

The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.

For information about the required Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide .

Note

If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.

firelensConfiguration -> (structure)

The FireLens configuration for the container. This is used to specify and configure a log router for container logs.

type -> (string) [required]

The log router to use. The valid values are fluentd or fluentbit .

Possible values:

  • fluentd
  • fluentbit

options -> (map)

The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"} . For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide .

Note

Tasks hosted on Fargate only support the file configuration file type.

key -> (string)

value -> (string)
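For example, a FireLens configuration that uses Fluent Bit with a local configuration file might look like this (the file path is illustrative):

```json
"firelensConfiguration": {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "true",
        "config-file-type": "file",
        "config-file-value": "/extra.conf"
    }
}
```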

privileged -> (boolean)

When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user).

user -> (string)

The user to use inside the container.

ulimits -> (list)

A list of ulimits to set in the container.

(structure)

The ulimit settings to pass to the container.

Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535 .

You can specify the ulimit settings for a container in a task definition.

name -> (string) [required]

The type of the ulimit .

Possible values:

  • core
  • cpu
  • data
  • fsize
  • locks
  • memlock
  • msgqueue
  • nice
  • nofile
  • nproc
  • rss
  • rtprio
  • rttime
  • sigpending
  • stack

softLimit -> (integer) [required]

The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit .

hardLimit -> (integer) [required]

The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit .
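For example, to raise the open-file limit for a container, a ulimits entry might look like this (the limit values are illustrative):

```json
"ulimits": [
    {
        "name": "nofile",
        "softLimit": 65535,
        "hardLimit": 65535
    }
]
```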

linuxParameters -> (structure)

Linux-specific modifications that are applied to the container configuration, such as Linux kernel capabilities.

capabilities -> (structure)

The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.

add -> (list)

The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.

Note

Tasks launched on Fargate only support adding the SYS_PTRACE kernel capability.

Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"

(string)

drop -> (list)

The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.

Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"

(string)
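For example, a capabilities block that adds one capability and drops another might look like this (the chosen capabilities are illustrative):

```json
"linuxParameters": {
    "capabilities": {
        "add": [ "SYS_PTRACE" ],
        "drop": [ "NET_RAW" ]
    }
}
```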

devices -> (list)

Any host devices to expose to the container.

(structure)

An object representing a container instance host device.

hostPath -> (string) [required]

The path for the device on the host container instance.

containerPath -> (string)

The path inside the container at which to expose the host device.

permissions -> (list)

The explicit permissions to provide to the container for the device. By default, the container has permissions for read , write , and mknod for the device.

(string)

Possible values:

  • read
  • write
  • mknod

initProcessEnabled -> (boolean)

Run an init process inside the container that forwards signals and reaps processes.

tmpfs -> (list)

The container path, mount options, and size (in MiB) of the tmpfs mount.

(structure)

The container path, mount options, and size of the tmpfs mount.

containerPath -> (string) [required]

The absolute file path where the tmpfs volume is to be mounted.

size -> (integer) [required]

The maximum size (in MiB) of the tmpfs volume.

mountOptions -> (list)

The list of tmpfs volume mount options.

Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"

(string)
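For example, a tmpfs mount might be specified as follows (the path, size, and options are illustrative):

```json
"linuxParameters": {
    "tmpfs": [
        {
            "containerPath": "/run/cache",
            "size": 128,
            "mountOptions": [ "rw", "noexec" ]
        }
    ]
}
```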

dependsOn -> (list)

The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.

(structure)

The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.

Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide . If you’re using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init . For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .

Note

For tasks that use the Fargate launch type, the task or service requires the following platforms:

  • Linux platform version 1.3.0 or later.
  • Windows platform version 1.0.0 or later.

For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide .

containerName -> (string) [required]

The name of a container.

condition -> (string) [required]

The dependency condition of the container. The following are the available conditions and their behavior:

  • START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
  • COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can’t be set on an essential container.
  • SUCCESS - This condition is the same as COMPLETE , but it also requires that the container exits with a zero status. This condition can’t be set on an essential container.
  • HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.

Possible values:

  • START
  • COMPLETE
  • SUCCESS
  • HEALTHY
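For example, a dependsOn entry that waits for another container’s health check might look like this (the container name is illustrative):

```json
"dependsOn": [
    {
        "containerName": "log-router",
        "condition": "HEALTHY"
    }
]
```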

startTimeout -> (integer)

Time duration (in seconds) to wait before giving up on resolving dependencies for a container.

stopTimeout -> (integer)

Time duration (in seconds) to wait before the container is forcefully killed if it doesn’t exit normally on its own.

systemControls -> (list)

A list of namespaced kernel parameters to set in the container.

(structure)

A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.

We don’t recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:

  • For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that’s started last determines which systemControls take effect.
  • For tasks that use the host network mode, the network namespace systemControls aren’t supported.

If you’re setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode .

  • For tasks that use the host IPC mode, IPC namespace systemControls aren’t supported.
  • For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.

Note

This parameter is not supported for Windows containers.

Note

This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version 1.4.0 or later (Linux). This isn’t supported for Windows containers on Fargate.

namespace -> (string)

The namespaced kernel parameter to set a value for.

value -> (string)

The namespaced kernel parameter to set a value for.

Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced" , and Sysctls that start with "fs.mqueue.*"

Valid network namespace values: Sysctls that start with "net.*" . Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.

All of these values are supported by Fargate.
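
For illustration, the description above can be expressed as a systemControls fragment inside a container definition; the keepalive value here is an arbitrary example, not a recommendation:

```
"systemControls": [
    {
        "namespace": "net.ipv4.tcp_keepalive_time",
        "value": "500"
    }
]
```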

interactive -> (boolean)

When this parameter is true , you can deploy containerized applications that require stdin or a tty to be allocated.

pseudoTerminal -> (boolean)

When this parameter is true , a TTY is allocated.

restartPolicy -> (structure)

The restart policy for the container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.

enabled -> (boolean) [required]

Specifies whether a restart policy is enabled for the container.

ignoredExitCodes -> (list)

A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.

(integer)

restartAttemptPeriod -> (integer)

A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn’t able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
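
Taken together, the restart policy fields above might be set as follows; the exit code and period are illustrative values only:

```
"restartPolicy": {
    "enabled": true,
    "ignoredExitCodes": [0],
    "restartAttemptPeriod": 120
}
```

With this policy, a container that exits with code 0 is left stopped, while any other exit can trigger a restart at most once every 120 seconds.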

JSON Syntax:

[
  {
    "name": "string",
    "image": "string",
    "memory": integer,
    "memoryReservation": integer,
    "repositoryCredentials": {
      "credentialsParameter": "string"
    },
    "healthCheck": {
      "command": ["string", ...],
      "interval": integer,
      "timeout": integer,
      "retries": integer,
      "startPeriod": integer
    },
    "cpu": integer,
    "essential": true|false,
    "entryPoint": ["string", ...],
    "command": ["string", ...],
    "workingDirectory": "string",
    "environmentFiles": [
      {
        "value": "string",
        "type": "s3"
      }
      ...
    ],
    "environment": [
      {
        "name": "string",
        "value": "string"
      }
      ...
    ],
    "secrets": [
      {
        "name": "string",
        "valueFrom": "string"
      }
      ...
    ],
    "readonlyRootFilesystem": true|false,
    "mountPoints": [
      {
        "sourceVolume": "string",
        "containerPath": "string",
        "readOnly": true|false
      }
      ...
    ],
    "logConfiguration": {
      "logDriver": "json-file"|"syslog"|"journald"|"gelf"|"fluentd"|"awslogs"|"splunk"|"awsfirelens",
      "options": {"string": "string"
        ...},
      "secretOptions": [
        {
          "name": "string",
          "valueFrom": "string"
        }
        ...
      ]
    },
    "firelensConfiguration": {
      "type": "fluentd"|"fluentbit",
      "options": {"string": "string"
        ...}
    },
    "privileged": true|false,
    "user": "string",
    "ulimits": [
      {
        "name": "core"|"cpu"|"data"|"fsize"|"locks"|"memlock"|"msgqueue"|"nice"|"nofile"|"nproc"|"rss"|"rtprio"|"rttime"|"sigpending"|"stack",
        "softLimit": integer,
        "hardLimit": integer
      }
      ...
    ],
    "linuxParameters": {
      "capabilities": {
        "add": ["string", ...],
        "drop": ["string", ...]
      },
      "devices": [
        {
          "hostPath": "string",
          "containerPath": "string",
          "permissions": ["read"|"write"|"mknod", ...]
        }
        ...
      ],
      "initProcessEnabled": true|false,
      "tmpfs": [
        {
          "containerPath": "string",
          "size": integer,
          "mountOptions": ["string", ...]
        }
        ...
      ]
    },
    "dependsOn": [
      {
        "containerName": "string",
        "condition": "START"|"COMPLETE"|"SUCCESS"|"HEALTHY"
      }
      ...
    ],
    "startTimeout": integer,
    "stopTimeout": integer,
    "systemControls": [
      {
        "namespace": "string",
        "value": "string"
      }
      ...
    ],
    "interactive": true|false,
    "pseudoTerminal": true|false,
    "restartPolicy": {
      "enabled": true|false,
      "ignoredExitCodes": [integer, ...],
      "restartAttemptPeriod": integer
    }
  }
  ...
]
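
As a sketch, a minimal invocation that registers a single-container daemon might look like the following; the family name, image URI, account ID, and log group are placeholders, not values from this page:

```
aws ecs register-daemon-task-definition \
    --family monitoring-agent \
    --container-definitions '[
      {
        "name": "agent",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/agent:latest",
        "memory": 256,
        "essential": true,
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/monitoring-agent",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "agent"
          }
        }
      }
    ]'
```

Because the awslogs driver is used, this daemon task would also need --execution-role-arn set to a role that can write to CloudWatch Logs.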

--cpu (string)

The number of CPU units used by the daemon task. It can be expressed as an integer using CPU units (for example, 1024 ).

--memory (string)

The amount of memory (in MiB) used by the daemon task. It can be expressed as an integer using MiB (for example, 1024 ).

--volumes (list)

A list of volume definitions in JSON format that containers in your daemon task can use.

(structure)

A data volume definition for a daemon task.

name -> (string)

The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.

host -> (structure)

The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it’s stored.

sourcePath -> (string)

When the host parameter is used, specify a sourcePath to declare the path on the host container instance that’s presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn’t exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.

If you’re using the Fargate launch type, the sourcePath parameter is not supported.

Shorthand Syntax:

name=string,host={sourcePath=string} ...

JSON Syntax:

[
  {
    "name": "string",
    "host": {
      "sourcePath": "string"
    }
  }
  ...
]
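
For example, using the shorthand syntax, a single host bind mount could be declared as follows (the volume name and path are assumed examples):

```
--volumes name=agent-config,host={sourcePath=/etc/agent}
```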

--tags (list)

The metadata that you apply to the daemon task definition to help you categorize and organize it. Each tag consists of a key and an optional value. You define both of them.

The following basic restrictions apply to tags:

  • Maximum number of tags per resource - 50
  • For each resource, each tag key must be unique, and each tag key can have only one value.
  • Maximum key length - 128 Unicode characters in UTF-8
  • Maximum value length - 256 Unicode characters in UTF-8
  • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
  • Tag keys and values are case-sensitive.
  • Do not use aws: , AWS: , or any upper or lowercase combination of these as a prefix for either keys or values, as they are reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

Constraints:

  • min: 0
  • max: 50

(structure)

The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define both of them.

The following basic restrictions apply to tags:

  • Maximum number of tags per resource - 50
  • For each resource, each tag key must be unique, and each tag key can have only one value.
  • Maximum key length - 128 Unicode characters in UTF-8
  • Maximum value length - 256 Unicode characters in UTF-8
  • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
  • Tag keys and values are case-sensitive.
  • Do not use aws: , AWS: , or any upper or lowercase combination of these as a prefix for either keys or values, as they are reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

key -> (string)

One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

Constraints:

  • min: 1
  • max: 128
  • pattern: ([\p{L}\p{Z}\p{N}_.:/=+\-@]*)

value -> (string)

The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).

Constraints:

  • min: 0
  • max: 256
  • pattern: ([\p{L}\p{Z}\p{N}_.:/=+\-@]*)

Shorthand Syntax:

key=string,value=string ...

JSON Syntax:

[
  {
    "key": "string",
    "value": "string"
  }
  ...
]
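
For example, two tags in shorthand syntax (the keys and values are arbitrary examples):

```
--tags key=team,value=platform key=environment,value=production
```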

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
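
A common pattern is to generate the skeleton, edit it, and pass it back with --cli-input-json ; the file name here is arbitrary:

```
aws ecs register-daemon-task-definition --generate-cli-skeleton > daemon-input.json
# edit daemon-input.json to fill in family, containerDefinitions, and so on, then:
aws ecs register-daemon-task-definition --cli-input-json file://daemon-input.json
```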

Global Options

--debug (boolean)

Turn on debug logging.

--endpoint-url (string)

Override command’s default URL with the given URL.

--no-verify-ssl (boolean)

By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.

--no-paginate (boolean)

Disable automatic pagination. If automatic pagination is disabled, the AWS CLI will only make one call, for the first page of results.

--output (string)

The formatting style for command output.

  • json
  • text
  • table
  • yaml
  • yaml-stream
  • off

--query (string)

A JMESPath query to use in filtering the response data.
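
For example, to print only the ARN of the new revision (assuming an input file prepared separately):

```
aws ecs register-daemon-task-definition \
    --cli-input-json file://daemon-input.json \
    --query 'daemonTaskDefinitionArn' \
    --output text
```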

--profile (string)

Use a specific profile from your credential file.

--region (string)

The region to use. Overrides config/env settings.

--version (string)

Display the version of this tool.

--color (string)

Turn on/off color output.

  • on
  • off
  • auto

--no-sign-request (boolean)

Do not sign requests. Credentials will not be loaded if this argument is provided.

--ca-bundle (string)

The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.

--cli-read-timeout (int)

The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout. The default value is 60 seconds.

--cli-connect-timeout (int)

The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout. The default value is 60 seconds.

--cli-binary-format (string)

The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64 encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior and binary values must be passed literally. When providing contents from a file that map to a binary blob fileb:// will always be treated as binary and use the file contents directly regardless of the cli-binary-format setting. When using file:// the file contents will need to be properly formatted for the configured cli-binary-format.

  • base64
  • raw-in-base64-out

--no-cli-pager (boolean)

Disable cli pager for output.

--cli-auto-prompt (boolean)

Automatically prompt for CLI input parameters.

--no-cli-auto-prompt (boolean)

Disable the automatic prompt for CLI input parameters.

--cli-error-format (string)

The formatting style for error output. By default, errors are displayed in enhanced format.

  • legacy
  • json
  • yaml
  • text
  • table
  • enhanced

Output

daemonTaskDefinitionArn -> (string)

The full Amazon Resource Name (ARN) of the registered daemon task definition.
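
For example, the response might look like the following; the ARN shown is an assumed illustration of the format, with a placeholder account ID and family name:

```
{
    "daemonTaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:daemon-task-definition/monitoring-agent:1"
}
```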