Job definition parameters
Job definitions are split into four basic parts: the job definition name, the type of the job definition, parameter substitution placeholder defaults, and the container properties for the job.
Job definition name
jobDefinitionName
-
When you register a job definition, you specify a name. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. The first job definition that's registered with that name is given a revision of 1. Any subsequent job definitions that are registered with that name are given an incremental revision number.
Type: String
Required: Yes
Type
type
-
When you register a job definition, you specify the type of job. If the job runs on Fargate resources, then multinode isn't supported. For more information about multi-node parallel jobs, see Creating a multi-node parallel job definition.
Type: String
Valid values: container | multinode
Required: Yes
Parameters
parameters
-
When you submit a job, you can specify parameters that should replace the placeholders or override the default job definition parameters. Parameters in job submission requests take precedence over the defaults in a job definition. This allows you to use the same job definition for multiple jobs that use the same format, and programmatically change values in the command at submission time.
Type: String to string map
Required: No
When you register a job definition, you can use parameter substitution placeholders in the command field of a job's container properties. For example:
"command": [ "ffmpeg", "-i", "Ref::inputfile", "-c", "Ref::codec", "-o", "Ref::outputfile" ]
In this example, Ref::inputfile, Ref::codec, and Ref::outputfile are parameter substitution placeholders in the command. The parameters object in the job definition allows you to set default values for these placeholders. For example, to set a default for the Ref::codec placeholder, you specify the following in the job definition:
"parameters" : {"codec" : "mp4"}
When this job definition is submitted to run, the Ref::codec argument in the container's command is replaced with the default value, mp4.
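The substitution behavior described above can be sketched as follows. This is a minimal illustration, not the actual AWS Batch implementation; the helper name is hypothetical.

```python
# Hypothetical sketch of Ref:: placeholder substitution -- illustrative only,
# not the actual AWS Batch implementation.
def substitute_command(command, definition_defaults, submitted_parameters=None):
    # Parameters supplied at job submission take precedence over defaults.
    params = {**definition_defaults, **(submitted_parameters or {})}
    resolved = []
    for arg in command:
        if arg.startswith("Ref::"):
            key = arg[len("Ref::"):]
            # Unresolved placeholders are left as-is in this sketch.
            arg = params.get(key, arg)
        resolved.append(arg)
    return resolved

command = ["ffmpeg", "-i", "Ref::inputfile", "-c", "Ref::codec", "-o", "Ref::outputfile"]
defaults = {"codec": "mp4"}
print(substitute_command(command, defaults, {"inputfile": "in.mov", "outputfile": "out.mp4"}))
# -> ['ffmpeg', '-i', 'in.mov', '-c', 'mp4', '-o', 'out.mp4']
```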
Platform capabilities
platformCapabilities
-
The platform capabilities required by the job definition. If no value is specified, it defaults to EC2. Jobs run on Fargate resources specify FARGATE.
Type: String
Valid values: EC2 | FARGATE
Required: No
Propagate tags
propagateTags
-
Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the FAILED state.
Type: Boolean
Required: No
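The tag-merging rules above can be illustrated with a short sketch. The helper is hypothetical; only the 50-tag limit and the job-over-definition priority come from the text.

```python
# Hypothetical illustration of the propagateTags merging rules described above.
MAX_PROPAGATED_TAGS = 50  # combined job + job definition tag limit

def merge_tags(job_definition_tags, job_tags):
    # Job tags take priority over job definition tags with the same name.
    merged = {**job_definition_tags, **job_tags}
    if len(merged) > MAX_PROPAGATED_TAGS:
        raise ValueError("job moves to FAILED: more than 50 combined tags")
    return merged

print(merge_tags({"team": "batch", "env": "dev"}, {"env": "prod"}))
# -> {'team': 'batch', 'env': 'prod'}
```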
Container properties
When you register a job definition, you must specify a list of container properties that are passed to the Docker daemon on a container instance when the job is placed. The following container properties are allowed in a job definition. For single-node jobs, these container properties are set at the job definition level. For multi-node parallel jobs, container properties are set at the Node properties level, for each node group.
command
-
The command that's passed to the container. This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information about the Docker CMD parameter, see https://docs.docker.com/engine/reference/builder/#cmd.
"command": ["string", ...]
Type: String array
Required: No
environment
-
The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.
Important: We don't recommend using plaintext environment variables for sensitive information, such as credential data.
Note: Environment variables must not start with AWS_BATCH; this naming convention is reserved for variables that are set by the AWS Batch service.
Type: Array of key-value pairs
Required: No
name
-
The name of the environment variable.
Type: String
Required: Yes, when environment is used.
value
-
The value of the environment variable.
Type: String
Required: Yes, when environment is used.
"environment" : [ { "name" : "envName1", "value" : "envValue1" }, { "name" : "envName2", "value" : "envValue2" } ]
executionRoleArn
-
When you register a job definition, you can specify an IAM role. The role provides the Amazon ECS container agent with permissions to call the API actions that are specified in its associated policies on your behalf. Jobs that are running on Fargate resources must provide an execution role. For more information, see AWS Batch execution IAM role.
Type: String
Required: No
fargatePlatformConfiguration
-
The platform configuration for jobs that are running on Fargate resources. Jobs that are running on EC2 resources must not specify this parameter.
Type: FargatePlatformConfiguration object
Required: No
platformVersion
-
The AWS Fargate platform version to use for the jobs, or LATEST to use a recent, approved version of the AWS Fargate platform.
Type: String
Default: LATEST
Required: No
image
-
The image used to start a job. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. You can also specify other repositories with repository-url/image:tag. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the Create a container section of the Docker Remote API and the IMAGE parameter of docker run.
Note: Docker image architecture must match the processor architecture of the compute resources that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based compute resources.
-
Images in Amazon ECR repositories use the full registry/repository:tag naming convention. For example, aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest.
-
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
-
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
-
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
Type: String
Required: Yes
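As a rough illustration of the naming conventions above, here's a sketch that splits an image reference into its parts. The helper is hypothetical and simplified; real image-reference parsing has more edge cases.

```python
# Hypothetical, simplified parser for the image naming conventions above;
# real registries have additional rules this sketch ignores.
def parse_image(image):
    tag = "latest"
    name = image
    # A tag follows the last ":" only if it comes after the last "/".
    if ":" in image and image.rfind(":") > image.rfind("/"):
        name, tag = image.rsplit(":", 1)
    registry = None
    first = name.split("/", 1)[0]
    # A leading component containing "." or ":" (or "localhost") is a registry domain.
    if "/" in name and ("." in first or ":" in first or first == "localhost"):
        registry, name = name.split("/", 1)
    return {"registry": registry, "repository": name, "tag": tag}

print(parse_image("quay.io/assemblyline/ubuntu"))
# -> {'registry': 'quay.io', 'repository': 'assemblyline/ubuntu', 'tag': 'latest'}
```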
-
instanceType
-
The instance type to use for a multi-node parallel job. All node groups in a multi-node parallel job must use the same instance type. This parameter isn't valid for single-node container jobs or for jobs run on Fargate resources.
Type: String
Required: No
jobRoleArn
-
When you register a job definition, you can specify an IAM role. The role provides the job container with permissions to call the API actions that are specified in its associated policies on your behalf. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
Type: String
Required: No
linuxParameters
-
Linux-specific modifications that are applied to the container, such as details for device mappings.
"linuxParameters": { "devices": [ { "hostPath": "string", "containerPath": "string", "permissions": [ "READ", "WRITE", "MKNOD" ] } ], "initProcessEnabled": true|false, "sharedMemorySize": 0, "tmpfs": [ { "containerPath": "string", "size": integer, "mountOptions": [ "string" ] } ], "maxSwap": integer, "swappiness": integer }
Type: LinuxParameters object
Required: No
devices
-
List of devices mapped into the container. This parameter maps to Devices in the Create a container section of the Docker Remote API and the --device option to docker run.
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Type: Array of Device objects
Required: No
hostPath
-
The path for the device on the host container instance.
Type: String
Required: Yes
containerPath
-
Path at which the device is exposed in the container. If this isn't specified, the device is exposed at the same path as the host path.
Type: String
Required: No
permissions
-
Permissions for the device in the container. If this isn't specified, the permissions are set to READ, WRITE, and MKNOD.
Type: Array of strings
Required: No
Valid values: READ | WRITE | MKNOD
initProcessEnabled
-
If true, run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command:
sudo docker version | grep "Server API version"
Type: Boolean
Required: No
maxSwap
-
The total amount of swap memory (in MiB) a job can use. This parameter is translated to the --memory-swap option to docker run, where the value is the sum of the container memory plus the maxSwap value. For more information, see --memory-swap details in the Docker documentation.
If a maxSwap value of 0 is specified, the container doesn't use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container uses the swap configuration for the container instance that it's running on. A maxSwap value must be set for the swappiness parameter to be used.
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Type: Integer
Required: No
sharedMemorySize
-
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Type: Integer
Required: No
swappiness
-
You can use this parameter to tune a container's memory swappiness behavior. A swappiness value of 0 causes swapping not to occur unless absolutely necessary. A swappiness value of 100 causes pages to be swapped aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter isn't specified, a default value of 60 is used. If a value isn't specified for maxSwap, then this parameter is ignored. If maxSwap is set to 0, the container doesn't use swap. This parameter maps to the --memory-swappiness option to docker run.
Consider the following when you use a per-container swap configuration.
-
Swap space must be enabled and allocated on the container instance for the containers to use.
Note: The Amazon ECS optimized AMIs don't have swap enabled by default. You must enable swap on the instance to use this feature. For more information, see Instance Store Swap Volumes in the Amazon EC2 User Guide for Linux Instances or How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?
-
The swap space parameters are only supported for job definitions using EC2 resources.
-
If the maxSwap and swappiness parameters are omitted from a job definition, each container has a default swappiness value of 60, and the total swap usage is limited to two times the memory reservation of the container.
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Type: Integer
Required: No
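The interaction between maxSwap and swappiness described above can be sketched as the docker run options they translate to. This is illustrative only; the actual translation is performed by the container agent, not by this code.

```python
# Illustrative sketch of how maxSwap and swappiness translate into
# docker run memory options, per the rules above. Not the actual agent code.
def swap_options(memory_mib, max_swap=None, swappiness=None):
    opts = {"--memory": f"{memory_mib}m"}
    if max_swap is None:
        # Container uses the instance's swap configuration; swappiness is ignored.
        return opts
    # --memory-swap is container memory plus maxSwap; maxSwap=0 disables swap.
    opts["--memory-swap"] = f"{memory_mib + max_swap}m"
    if max_swap > 0:
        opts["--memory-swappiness"] = str(60 if swappiness is None else swappiness)
    return opts

print(swap_options(2048, max_swap=2048, swappiness=30))
# -> {'--memory': '2048m', '--memory-swap': '4096m', '--memory-swappiness': '30'}
```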
-
tmpfs
-
The container path, mount options, and size of the tmpfs mount.
Type: Array of Tmpfs objects
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Required: No
containerPath
-
The absolute file path in the container where the tmpfs volume is mounted.
Type: String
Required: Yes
mountOptions
-
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
Type: Array of strings
Required: No
size
-
The size (in MiB) of the tmpfs volume.
Type: Integer
Required: Yes
logConfiguration
-
The log configuration specification for the job.
This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be either configured on the container instance or on another log server to provide remote logging options. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Note: AWS Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command:
sudo docker version | grep "Server API version"
"logConfiguration": { "logDriver": "string", "options": { "optionName1" : "optionValue1", "optionName2" : "optionValue2" }, "secretOptions": [ { "name" : "secretOptionName1", "valueFrom" : "secretOptionArn1" }, { "name" : "secretOptionName2", "valueFrom" : "secretOptionArn2" } ] }
Type: LogConfiguration object
Required: No
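The driver restrictions described in this section can be summarized in a small check. This is a hypothetical validation sketch based on the driver lists in the text, not an AWS Batch API.

```python
# Hypothetical validation sketch for logConfiguration, based on the driver
# lists in this section; not an AWS Batch API.
SUPPORTED_DRIVERS = {"awslogs", "fluentd", "gelf", "json-file",
                     "journald", "logentries", "syslog", "splunk"}
FARGATE_DRIVERS = {"awslogs", "splunk"}

def validate_log_driver(log_driver, platform):
    if log_driver not in SUPPORTED_DRIVERS:
        return False
    # Fargate jobs are restricted to the awslogs and splunk drivers.
    if platform == "FARGATE":
        return log_driver in FARGATE_DRIVERS
    return True

print(validate_log_driver("fluentd", "EC2"))      # -> True
print(validate_log_driver("fluentd", "FARGATE"))  # -> False
```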
logDriver
-
The log driver to use for the job. By default, AWS Batch enables the awslogs log driver. The valid values listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, jobs use the same logging driver that the Docker daemon uses. However, the job can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the job definition. If you want to specify another logging driver for a job, then the log system must be configured on the container instance in the compute environment. Or, alternatively, you can configure it on another log server to provide remote logging options. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Note: AWS Batch currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers might be available in future releases of the Amazon ECS container agent.
The supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, and splunk.
Note: Jobs that are running on Fargate resources are restricted to the awslogs and splunk log drivers.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command:
sudo docker version | grep "Server API version"
Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable. Otherwise, the containers placed on that instance can't use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
awslogs
-
Specifies the Amazon CloudWatch Logs logging driver. For more information, see Using the awslogs log driver and Amazon CloudWatch Logs logging driver in the Docker documentation.
fluentd
-
Specifies the Fluentd logging driver. For more information, including usage and options, see Fluentd logging driver in the Docker documentation.
gelf
-
Specifies the Graylog Extended Format (GELF) logging driver. For more information, including usage and options, see Graylog Extended Format logging driver in the Docker documentation.
journald
-
Specifies the journald logging driver. For more information, including usage and options, see Journald logging driver in the Docker documentation.
json-file
-
Specifies the JSON file logging driver. For more information, including usage and options, see JSON File logging driver in the Docker documentation.
splunk
-
Specifies the Splunk logging driver. For more information, including usage and options, see Splunk logging driver in the Docker documentation.
syslog
-
Specifies the syslog logging driver. For more information, including usage and options, see Syslog logging driver in the Docker documentation.
Type: String
Required: Yes
Valid values: awslogs | fluentd | gelf | journald | json-file | splunk | syslog
Note: If you have a custom driver that's not listed earlier that you would like to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, Amazon Web Services doesn't currently provide support for running modified copies of this software.
options
-
Log configuration options to send to a log driver for the job.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance.
Type: String to string map
Required: No
secretOptions
-
An object representing the secret to pass to the log configuration. For more information, see Specifying sensitive data.
Type: Object array
Required: No
name
-
The name of the log driver option to set in the job.
Type: String
Required: Yes
valueFrom
-
The ARN of the secret to expose to the log configuration of the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
Note If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
Type: String
Required: Yes
memory
-
This parameter is deprecated and not supported for jobs run on Fargate resources. Use resourceRequirements instead. For jobs run on EC2 resources that aren't using resourceRequirements, this is the number of MiB of memory reserved for the job. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once.
Note: If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Compute Resource Memory Management.
Type: Integer
Required: Yes
mountPoints
-
The mount points for data volumes in your container. This parameter maps to Volumes in the Create a container section of the Docker Remote API and the --volume option to docker run.
"mountPoints": [ { "sourceVolume": "string", "containerPath": "string", "readOnly": true|false } ]
Type: Object array
Required: No
sourceVolume
-
The name of the volume to mount.
Type: String
Required: Yes, when mountPoints is used.
containerPath
-
The path on the container at which to mount the host volume.
Type: String
Required: Yes, when mountPoints is used.
readOnly
-
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume.
Type: Boolean
Required: No
Default: False
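Because each mount point's sourceVolume must name a volume defined in the job definition's volumes list, a quick consistency check might look like this. The helper is a hypothetical sketch, not an AWS API.

```python
# Hypothetical consistency check: every mountPoints.sourceVolume should
# reference a name in the job definition's volumes list.
def undefined_source_volumes(mount_points, volumes):
    defined = {v["name"] for v in volumes}
    return [m["sourceVolume"] for m in mount_points
            if m["sourceVolume"] not in defined]

volumes = [{"name": "scratch", "host": {"sourcePath": "/data"}}]
mounts = [{"sourceVolume": "scratch", "containerPath": "/scratch", "readOnly": False},
          {"sourceVolume": "missing", "containerPath": "/x", "readOnly": True}]
print(undefined_source_volumes(mounts, volumes))
# -> ['missing']
```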
networkConfiguration
-
The network configuration for jobs that are running on Fargate resources. Jobs that are running on EC2 resources must not specify this parameter.
"networkConfiguration": { "assignPublicIp": "string" }
Type: NetworkConfiguration object
Required: No
assignPublicIp
-
Indicates whether the job should have a public IP address. This is required if the job needs outbound network access.
Type: String
Valid values: ENABLED | DISABLED
Required: No
Default: DISABLED
privileged
-
When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user). This parameter maps to Privileged in the Create a container section of the Docker Remote API and the --privileged option to docker run. This parameter isn't applicable to jobs running on Fargate resources; don't provide it, or specify it as false.
"privileged": true|false
Type: Boolean
Required: No
readonlyRootFilesystem
-
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API and the --read-only option to docker run.
"readonlyRootFilesystem": true|false
Type: Boolean
Required: No
resourceRequirements
-
The type and amount of a resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
"resourceRequirements" : [ { "type": "GPU", "value": "number" } ]
Type: Object array
Required: No
type
-
The type of resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
Type: String
Required: Yes, when resourceRequirements is used.
value
-
The quantity of the specified resource to reserve for the container. The values vary based on the type specified.
- type="GPU"
-
The number of physical GPUs to reserve for the container. The number of GPUs reserved for all containers in a job shouldn't exceed the number of available GPUs on the compute resource that the job is launched on.
- type="MEMORY"
-
The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once.
Note: If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Compute Resource Memory Management.
For jobs that are running on Fargate resources, value must match one of the supported values, and the VCPU value must be one of the values supported for that memory value.
VCPU | MEMORY
0.25 vCPU | 512, 1024, and 2048 MiB
0.5 vCPU | 1024, 2048, 3072, and 4096 MiB
1 vCPU | 2048, 3072, 4096, 5120, 6144, 7168, and 8192 MiB
2 vCPU | 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, and 16384 MiB
4 vCPU | 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, and 30720 MiB
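The supported Fargate combinations can be expressed as a lookup for validation. The values are transcribed from the table above; the helper itself is hypothetical, not an AWS API.

```python
# Supported Fargate vCPU/memory combinations, transcribed from the table above.
FARGATE_MEMORY_MIB = {
    "0.25": [512, 1024, 2048],
    "0.5":  [1024, 2048, 3072, 4096],
    "1":    list(range(2048, 8193, 1024)),    # 2048..8192 in 1024 MiB steps
    "2":    list(range(4096, 16385, 1024)),   # 4096..16384
    "4":    list(range(8192, 30721, 1024)),   # 8192..30720
}

def fargate_pair_is_valid(vcpu, memory_mib):
    # Hypothetical validator: both values must appear together in the table.
    return memory_mib in FARGATE_MEMORY_MIB.get(vcpu, [])

print(fargate_pair_is_valid("1", 4096))     # -> True
print(fargate_pair_is_valid("0.25", 4096))  # -> False
```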
- type="VCPU"
-
The number of vCPUs reserved for the job. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares. For jobs that are running on EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
For jobs that are running on Fargate resources, value must match one of the supported values, and the MEMORY value must be one of the values supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, and 4.
Type: String
Required: Yes, when resourceRequirements is used.
secrets
-
The secrets for the job that are exposed as environment variables. For more information, see Specifying sensitive data.
"secrets": [ { "name": "
secretName1
", "valueFrom": "secretArn1
" }, { "name": "secretName2
", "valueFrom": "secretArn2
" } ... ]Type: Object array
Required: No
name
-
The name of the environment variable that contains the secret.
Type: String
Required: Yes, when secrets is used.
valueFrom
-
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
Note: If the SSM Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
Type: String
Required: Yes, when secrets is used.
ulimits
-
A list of ulimits values to set in the container. This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run.
"ulimits": [ { "name": string, "softLimit": integer, "hardLimit": integer } ... ]
Type: Object array
Required: No
name
-
The type of the ulimit.
Type: String
Required: Yes, when ulimits is used.
hardLimit
-
The hard limit for the ulimit type.
Type: Integer
Required: Yes, when ulimits is used.
softLimit
-
The soft limit for the ulimit type.
Type: Integer
Required: Yes, when ulimits is used.
user
-
The user name to use inside the container. This parameter maps to User in the Create a container section of the Docker Remote API and the --user option to docker run.
"user": "string"
Type: String
Required: No
vcpus
-
The number of vCPUs reserved for the container. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares. You must specify at least one vCPU. This is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once. Jobs that are running on Fargate resources must specify the vCPU requirement for the job using resourceRequirements. Other jobs can also specify the vCPU requirement using resourceRequirements.
Type: Integer
Required: Yes
volumes
-
When you register a job definition, you can specify a list of volumes that are passed to the Docker daemon on a container instance. The following parameters are allowed in the container properties:
[ { "name": "
string
", "host": { "sourcePath": "string
" } } ]name
-
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. This name is referenced in the
sourceVolume
parameter of container definitionmountPoints
.Type: String
Required: No
host
-
The contents of the host parameter determine whether your data volume persists on the host container instance and where it is stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the container associated with it stops running.
Note: This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Type: Object
Required: No
sourcePath
-
The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you.
If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
Type: String
Required: No
Node properties
nodeProperties
-
When you register a multi-node parallel job definition, you must specify a list of node properties. These node properties define the number of nodes to use in your job, the main node index, and the different node ranges to use. If the job runs on Fargate resources, then you can't specify nodeProperties; use containerProperties instead. The following node properties are allowed in a job definition. For more information, see Multi-node Parallel Jobs.
Type: NodeProperties object
Required: No
mainNode
-
Specifies the node index for the main node of a multi-node parallel job. This node index value must be less than the number of nodes.
Type: Integer
Required: Yes
numNodes
-
The number of nodes associated with a multi-node parallel job.
Type: Integer
Required: Yes
nodeRangeProperties
-
A list of node ranges and their properties associated with a multi-node parallel job.
Type: Array of NodeRangeProperty objects
Required: Yes
targetNodes
-
The range of nodes, using node index values. A range of 0:3 indicates nodes with index values of 0 through 3. If the starting range value is omitted (:n), then 0 is used to start the range. If the ending range value is omitted (n:), then the highest possible node index is used to end the range. Your accumulative node ranges must account for all nodes (0:n). You can nest node ranges, for example 0:10 and 4:5, in which case the 4:5 range properties override the 0:10 properties.
Type: String
Required: No
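Range resolution as described above (omitted endpoints, later ranges overriding earlier ones) can be sketched as follows. The helper is hypothetical, not the AWS Batch scheduler.

```python
# Hypothetical sketch of targetNodes range resolution: omitted endpoints
# default to 0 / numNodes-1, and later (nested) ranges override earlier ones.
def resolve_node_properties(node_ranges, num_nodes):
    # node_ranges: list of (targetNodes, properties) in job-definition order.
    assignment = {}
    for target, props in node_ranges:
        start_s, _, end_s = target.partition(":")
        start = int(start_s) if start_s else 0
        end = int(end_s) if end_s else num_nodes - 1
        for node in range(start, end + 1):  # ranges are inclusive
            assignment[node] = props
    return assignment

ranges = [("0:10", "base"), ("4:5", "override")]
result = resolve_node_properties(ranges, 11)
print(result[4], result[5], result[3])
# -> override override base
```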
container
-
The container details for the node range. For more information, see Container properties.
Type: ContainerProperties object
Required: No
Retry strategy
retryStrategy
-
When you register a job definition, you can optionally specify a retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that's specified during a SubmitJob operation overrides the retry strategy defined here. By default, each job is attempted one time. If you specify more than one attempt, the job is retried if it fails. Examples of a failed attempt include the job returning a non-zero exit code or the container instance being terminated. For more information, see Automated job retries.
Type: RetryStrategy object
Required: No
attempts
-
The number of times to move a job to the RUNNABLE status. You can specify between 1 and 10 attempts. If attempts is greater than one, the job is retried that many times if it fails, until it has moved to RUNNABLE.
"attempts": integer
Type: Integer
Required: No
evaluateOnExit
-
Array of up to 5 objects that specify conditions under which the job should be retried or failed. If this parameter is specified, then the attempts parameter must also be specified.
"evaluateOnExit": [ { "action": "string", "onExitCode": "string", "onReason": "string", "onStatusReason": "string" } ]
Type: Array of EvaluateOnExit objects
Required: No
action
-
Specifies the action to take if all of the specified conditions (onStatusReason, onReason, and onExitCode) are met. The values aren't case sensitive.
Type: String
Required: Yes
Valid values: RETRY | EXIT
onExitCode
-
Contains a glob pattern to match against the decimal representation of the ExitCode that's returned for a job. The pattern can be up to 512 characters long. It can contain only numbers (not letters or other special characters). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
Type: String
Required: No
onReason
-
Contains a glob pattern to match against the Reason that's returned for a job. The pattern can be up to 512 characters long. It can contain letters, numbers, periods (.), colons (:), and white space (spaces, tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
Type: String
Required: No
onStatusReason
-
Contains a glob pattern to match against the StatusReason that's returned for a job. The pattern can be up to 512 characters long. It can contain letters, numbers, periods (.), colons (:), and white space (spaces, tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
Type: String
Required: No
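The glob matching described for onExitCode, onReason, and onStatusReason (an exact match, or a prefix match when the pattern ends with an asterisk) might be sketched as follows. The helpers and the first-match behavior are a hypothetical illustration, not the AWS Batch implementation.

```python
# Hypothetical sketch of the evaluateOnExit glob matching described above:
# exact match, or prefix match when the pattern ends with "*".
def glob_matches(pattern, value):
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    return pattern == value

def evaluate_on_exit(conditions, exit_code, reason, status_reason):
    # A condition applies only if all of its specified fields match.
    for cond in conditions:
        checks = [(cond.get("onExitCode"), str(exit_code)),
                  (cond.get("onReason"), reason),
                  (cond.get("onStatusReason"), status_reason)]
        if all(p is None or glob_matches(p, v) for p, v in checks):
            return cond["action"]
    return None

conds = [{"onExitCode": "137", "action": "RETRY"},
         {"onReason": "CannotPullContainerError*", "action": "EXIT"}]
print(evaluate_on_exit(conds, 137, "", ""))  # -> RETRY
```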
Tags
tags
-
Key-value pair tags to associate with the job definition. For more information, see Tagging your AWS Batch resources.
Type: String to string map
Required: No
Timeout
timeout
-
You can configure a timeout duration for your jobs so that if a job runs longer than that, AWS Batch terminates the job. If a job is terminated due to a timeout, it isn't retried. Any timeout configuration that's specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see Job Timeouts.
Type: JobTimeout object
Required: No
attemptDurationSeconds
-
The time duration in seconds (measured from the job attempt's startedAt timestamp) after which AWS Batch terminates unfinished jobs. The minimum value for the timeout is 60 seconds.
Type: Integer
Required: No
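A quick sketch of the timeout arithmetic described above (hypothetical; AWS Batch performs this check server-side):

```python
# Hypothetical sketch of the attemptDurationSeconds rule: a job attempt is
# terminated once it has run longer than the configured duration (min 60s).
MIN_TIMEOUT_SECONDS = 60

def should_terminate(started_at_epoch, now_epoch, attempt_duration_seconds):
    if attempt_duration_seconds < MIN_TIMEOUT_SECONDS:
        raise ValueError("attemptDurationSeconds must be at least 60")
    return now_epoch - started_at_epoch > attempt_duration_seconds

print(should_terminate(1000, 1500, 60))  # -> True
print(should_terminate(1000, 1050, 60))  # -> False
```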