

# Specifying task settings for AWS Database Migration Service tasks

Each task has settings that you can configure according to the needs of your database migration. You create these settings in a JSON file or, with some settings, you can specify the settings using the AWS DMS console. For information about how to use a task configuration file to set task settings, see [Task settings example](#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

There are several main types of task settings, as listed following.

**Topics**
+ [Task settings example](#CHAP_Tasks.CustomizingTasks.TaskSettings.Example)
+ [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md)
+ [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md)
+ [Time Travel task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.md)
+ [Logging task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.md)
+ [Control table task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.md)
+ [Stream buffer task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.StreamBuffer.md)
+ [Change processing tuning settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.md)
+ [Data validation task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.DataValidation.md)
+ [Data resync settings](CHAP_Tasks.CustomizingTasks.TaskSettings.DataResyncSettings.md)
+ [Task settings for change processing DDL handling](CHAP_Tasks.CustomizingTasks.TaskSettings.DDLHandling.md)
+ [Character substitution task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution.md)
+ [Before image task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.BeforeImage.md)
+ [Error handling task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling.md)
+ [Saving task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Saving.md)


| Task settings | Relevant documentation | 
| --- | --- | 
|   **Creating a task assessment report**  You can create a task assessment report that shows any unsupported data types that could cause problems during migration. You can run this report on your task before running the task to find out potential issues.  |  [Enabling and working with premigration assessments for a task](CHAP_Tasks.AssessmentReport.md)  | 
|   **Creating a task**  When you create a task, you specify the source, target, and replication instance, along with any migration settings.  |  [Creating a task](CHAP_Tasks.Creating.md)  | 
|   **Creating an ongoing replication task**  You can set up a task to provide continuous replication between the source and target.   |  [Creating tasks for ongoing replication using AWS DMS](CHAP_Task.CDC.md)  | 
|   **Applying task settings**  Each task has settings that you can configure according to the needs of your database migration. You create these settings in a JSON file or, with some settings, you can specify the settings using the AWS DMS console.  |  [Specifying task settings for AWS Database Migration Service tasks](#CHAP_Tasks.CustomizingTasks.TaskSettings)  | 
|   **Data validation**  Use data validation to have AWS DMS compare the data on your target data store with the data from your source data store.  |  [AWS DMS data validation](CHAP_Validating.md)  | 
|   **Modifying a task**  When a task is stopped, you can modify the settings for the task.  |  [Modifying a task](CHAP_Tasks.Modifying.md)  | 
|   **Reloading tables during a task**  You can reload a table during a task if an error occurs during the task.  |  [Reloading tables during a task](CHAP_Tasks.ReloadTables.md)  | 
|   **Using table mapping**  Table mapping uses several types of rules to specify task settings for the data source, source schema, data, and any transformations that should occur during the task.  |  [Selection rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.md) and [Transformation rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.md)  | 
|   **Applying filters**  You can use source filters to limit the number and type of records transferred from your source to your target. For example, you can specify that only employees with a location of headquarters are moved to the target database. You apply filters on a column of data.  |  [Using source filters](CHAP_Tasks.CustomizingTasks.Filters.md)  | 
|   **Monitoring a task**  There are several ways to get information on the performance of a task and the tables used by the task.  |  [Monitoring AWS DMS tasks](CHAP_Monitoring.md)  | 
|   **Managing task logs**  You can view and delete task logs using the AWS DMS API or AWS CLI.  |  [Viewing and managing AWS DMS task logs](CHAP_Monitoring.md#CHAP_Monitoring.ManagingLogs)  | 

## Task settings example


You can use either the AWS Management Console or the AWS CLI to create a replication task. If you use the AWS CLI, you set task settings by creating a JSON file, then specifying the `file://` URI of the JSON file as the [ReplicationTaskSettings](https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html#DMS-CreateReplicationTask-request-ReplicationTaskSettings) parameter of the [CreateReplicationTask](https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html) operation.

The following example shows how to use the AWS CLI to call the `CreateReplicationTask` operation:

```
aws dms create-replication-task \
--replication-task-identifier MyTask \
--source-endpoint-arn arn:aws:dms:us-west-2:123456789012:endpoint:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--target-endpoint-arn arn:aws:dms:us-west-2:123456789012:endpoint:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--replication-instance-arn arn:aws:dms:us-west-2:123456789012:rep:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--migration-type cdc \
--table-mappings file://tablemappings.json \
--replication-task-settings file://settings.json
```

The preceding example uses a table mapping file called `tablemappings.json`. For table mapping examples, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).

A task settings JSON file can look like the following. 

```
{
  "TargetMetadata": {
    "TargetSchema": "",
    "SupportLobs": true,
    "FullLobMode": false,
    "LobChunkSize": 64,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32,
    "InlineLobMaxSize": 0,
    "LoadMaxFileSize": 0,
    "ParallelLoadThreads": 0,
    "ParallelLoadBufferSize":0,
    "ParallelLoadQueuesPerThread": 1,
    "ParallelApplyThreads": 0,
    "ParallelApplyBufferSize": 100,
    "ParallelApplyQueuesPerThread": 1,    
    "BatchApplyEnabled": false,
    "TaskRecoveryTableEnabled": false
  },
  "FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING",
    "CreatePkAfterFullLoad": false,
    "StopTaskCachedChangesApplied": false,
    "StopTaskCachedChangesNotApplied": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000
  },
  "TTSettings": {
    "EnableTT" : true,
    "TTS3Settings": {
        "EncryptionMode": "SSE_KMS",
        "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-west-2:112233445566:key/myKMSKey",
        "ServiceAccessRoleArn": "arn:aws:iam::112233445566:role/dms-tt-s3-access-role",
        "BucketName": "myttbucket",
        "BucketFolder": "myttfolder",
        "EnableDeletingFromS3OnTaskDelete": false
      },
    "TTRecordSettings": {
        "EnableRawData" : true,
        "OperationsToLog": "DELETE,UPDATE",
        "MaxRecordSize": 64
      }
  },
  "Logging": {
    "EnableLogging": false
  },
  "ControlTablesSettings": {
    "ControlSchema":"",
    "HistoryTimeslotInMinutes":5,
    "HistoryTableEnabled": false,
    "SuspendedTablesTableEnabled": false,
    "StatusTableEnabled": false
  },
  "StreamBufferSettings": {
    "StreamBufferCount": 3,
    "StreamBufferSizeInMB": 8
  },
  "ChangeProcessingTuning": { 
    "BatchApplyPreserveTransaction": true, 
    "BatchApplyTimeoutMin": 1, 
    "BatchApplyTimeoutMax": 30, 
    "BatchApplyMemoryLimit": 500, 
    "BatchSplitSize": 0, 
    "MinTransactionSize": 1000, 
    "CommitTimeout": 1, 
    "MemoryLimitTotal": 1024, 
    "MemoryKeepTime": 60, 
    "StatementCacheSize": 50 
  },
  "ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
  },
  "LoopbackPreventionSettings": {
    "EnableLoopbackPrevention": true,
    "SourceSchema": "LOOP-DATA",
    "TargetSchema": "loop-data"
  },

  "CharacterSetSettings": {
    "CharacterReplacements": [ {
        "SourceCharacterCodePoint": 35,
        "TargetCharacterCodePoint": 52
      }, {
        "SourceCharacterCodePoint": 37,
        "TargetCharacterCodePoint": 103
      }
    ],
    "CharacterSetSupport": {
      "CharacterSet": "UTF16_PlatformEndian",
      "ReplaceWithCharacterCodePoint": 0
    }
  },
  "BeforeImageSettings": {
    "EnableBeforeImage": false,
    "FieldName": "",  
    "ColumnFilter": "pk-only"
  },
  "ErrorBehavior": {
    "DataErrorPolicy": "LOG_ERROR",
    "DataTruncationErrorPolicy":"LOG_ERROR",
    "DataMaskingErrorPolicy": "STOP_TASK",
    "DataErrorEscalationPolicy":"SUSPEND_TABLE",
    "DataErrorEscalationCount": 50,
    "TableErrorPolicy":"SUSPEND_TABLE",
    "TableErrorEscalationPolicy":"STOP_TASK",
    "TableErrorEscalationCount": 50,
    "RecoverableErrorCount": 0,
    "RecoverableErrorInterval": 5,
    "RecoverableErrorThrottling": true,
    "RecoverableErrorThrottlingMax": 1800,
    "ApplyErrorDeletePolicy":"IGNORE_RECORD",
    "ApplyErrorInsertPolicy":"LOG_ERROR",
    "ApplyErrorUpdatePolicy":"LOG_ERROR",
    "ApplyErrorEscalationPolicy":"LOG_ERROR",
    "ApplyErrorEscalationCount": 0,
    "FullLoadIgnoreConflicts": true
  },
  "ValidationSettings": {
    "EnableValidation": false,
    "ValidationMode": "ROW_LEVEL",
    "ThreadCount": 5,
    "PartitionSize": 10000,
    "FailureMaxCount": 1000,
    "RecordFailureDelayInMinutes": 5,
    "RecordSuspendDelayInMinutes": 30,
    "MaxKeyColumnSize": 8096,
    "TableFailureMaxCount": 10000,
    "ValidationOnly": false,
    "HandleCollationDiff": false,
    "RecordFailureDelayLimitInMinutes": 1,
    "SkipLobColumns": false,
    "ValidationPartialLobSize": 0,
    "ValidationQueryCdcDelaySeconds": 0
  }
}
```
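Because task settings are plain JSON, you can parse and sanity-check a settings document before passing it to the CLI. The following sketch (not a DMS tool) inspects an abridged version of the document above:

```python
import json

# Abridged version of the settings document above; in practice you would
# read the full file, for example with open("settings.json").
doc = '{"TargetMetadata": {"SupportLobs": true, "LobChunkSize": 64}}'

settings = json.loads(doc)
print(settings["TargetMetadata"]["LobChunkSize"])  # 64
```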

# Target metadata task settings

Target metadata settings include the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ `TargetSchema` – The target table schema name. If this metadata option is empty, the schema from the source table is used. AWS DMS automatically adds the owner prefix for the target database to all tables if no source schema is defined. This option should be left empty for MySQL-type target endpoints. Renaming a schema in data mapping takes precedence over this setting.
+ LOB settings – Settings that determine how large objects (LOBs) are managed. If you set `SupportLobs=true`, you must set one of the following to `true`: 
  + `FullLobMode` – If you set this option to `true`, then you must enter a value for the `LobChunkSize` option. Enter the size, in kilobytes, of the LOB chunks to use when replicating the data to the target. The `FullLobMode` option works best for very large LOB sizes but tends to cause slower loading. The recommended value for `LobChunkSize` is 64 kilobytes. Increasing the value for `LobChunkSize` above 64 kilobytes can cause task failures.
  + `InlineLobMaxSize` – This value determines which LOBs AWS DMS transfers inline during a full load. Transferring small LOBs is more efficient than looking them up from a source table. During a full load, AWS DMS checks all LOBs and performs an inline transfer for the LOBs that are smaller than `InlineLobMaxSize`. AWS DMS transfers all LOBs larger than `InlineLobMaxSize` in `FullLobMode`. The default value for `InlineLobMaxSize` is 0, and the range is 1–102400 kilobytes (100 MB). Set a value for `InlineLobMaxSize` only if you know that most of the LOBs are smaller than the value specified in `InlineLobMaxSize`.
  + `LimitedSizeLobMode` – If you set this option to `true`, then you must enter a value for the `LobMaxSize` option. Enter the maximum size, in kilobytes, for an individual LOB. The maximum value for `LobMaxSize` is 102400 kilobytes (100 MB).

  For more information about the criteria for using these task LOB settings, see [Setting LOB support for source databases in an AWS DMS task](CHAP_Tasks.LOBSupport.md). You can also control the management of LOBs for individual tables. For more information, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).
+ `BatchApplyEnabled` – Determines if each transaction is applied individually or if changes are committed in batches. The default value is `false`.

  When `BatchApplyEnabled` is set to `true`, DMS requires a Primary Key (PK) or Unique Key (UK) on the **source** table(s). Without a PK or UK on source tables, only batch inserts are applied but not batch updates and deletes.

  When `BatchApplyEnabled` is set to `true`, AWS DMS generates an error message if a **target** table has a unique constraint and a primary key. Target tables with both a unique constraint and primary key aren't supported when `BatchApplyEnabled` is set to `true`.

  When `BatchApplyEnabled` is set to true and AWS DMS encounters a data error from a table with the default error-handling policy, the AWS DMS task switches from batch mode to one-by-one mode for the rest of the tables. To alter this behavior, you can set the `"SUSPEND_TABLE"` action on the following policies in the `"ErrorBehavior"` group property of the task settings JSON file:
  + `DataErrorPolicy`
  + `ApplyErrorDeletePolicy`
  + `ApplyErrorInsertPolicy`
  + `ApplyErrorUpdatePolicy`

  For more information on this `"ErrorBehavior"` group property, see the example task settings JSON file in [Specifying task settings for AWS Database Migration Service tasks](CHAP_Tasks.CustomizingTasks.TaskSettings.md). After setting these policies to `"SUSPEND_TABLE"`, the AWS DMS task then suspends data errors on any tables that raise them and continues in batch mode for all tables.

  You can use the `BatchApplyEnabled` parameter with the `BatchApplyPreserveTransaction` parameter. If `BatchApplyEnabled` is set to `true`, then the `BatchApplyPreserveTransaction` parameter determines the transactional integrity. 

  If `BatchApplyPreserveTransaction` is set to `true`, then transactional integrity is preserved and a batch is guaranteed to contain all the changes within a transaction from the source.

  If `BatchApplyPreserveTransaction` is set to `false`, then there can be temporary lapses in transactional integrity to improve performance. 

  The `BatchApplyPreserveTransaction` parameter applies only to Oracle target endpoints, and is only relevant when the `BatchApplyEnabled` parameter is set to `true`.

  When LOB columns are included in the replication, you can use `BatchApplyEnabled` only in limited LOB mode.

  For more information about using these settings for a change data capture (CDC) load, see [Change processing tuning settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.md).
+ `MaxFullLoadSubTasks` – Indicates the maximum number of tables to load in parallel. The default is 8; the maximum value is 49.
+ `ParallelLoadThreads` – Specifies the number of threads that AWS DMS uses to load each table into the target database. This parameter has maximum values for non-RDBMS targets. The maximum value for a DynamoDB target is 200. The maximum value for an Amazon Kinesis Data Streams, Apache Kafka, or Amazon OpenSearch Service target is 32. You can ask to have this maximum limit increased. `ParallelLoadThreads` applies to Full Load tasks. For information on the settings for parallel load of individual tables, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).

  This setting applies to the following endpoint engine types:
  + DynamoDB
  + Amazon Kinesis Data Streams
  + Amazon MSK
  + Amazon OpenSearch Service
  + Amazon Redshift

  AWS DMS supports `ParallelLoadThreads` for MySQL as an extra connection attribute. `ParallelLoadThreads` does not apply to MySQL as a task setting. 
+ `ParallelLoadBufferSize` – Specifies the maximum number of records to store in the buffer that the parallel load threads use to load data to the target. The default value is 50. The maximum value is 1,000. This setting is currently only valid when DynamoDB, Kinesis, Apache Kafka, or OpenSearch is the target. Use this parameter with `ParallelLoadThreads`. `ParallelLoadBufferSize` is valid only when there is more than one thread. For information on the settings for parallel load of individual tables, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).
+ `ParallelLoadQueuesPerThread` – Specifies the number of queues that each concurrent thread accesses to take data records out of queues and generate a batch load for the target. The default is 1. This setting is currently only valid when Kinesis or Apache Kafka is the target.
+ `ParallelApplyThreads` – Specifies the number of concurrent threads that AWS DMS uses during a CDC load to push data records to an Amazon DocumentDB, Kinesis, Amazon MSK, OpenSearch, or Amazon Redshift target endpoint. The default is zero (0).

  This setting applies only during CDC. It doesn't apply to full load.

  

  This setting applies to the following endpoint engine types:
  + Amazon DocumentDB (with MongoDB compatibility)
  + Amazon Kinesis Data Streams
  + Amazon Managed Streaming for Apache Kafka
  + Amazon OpenSearch Service
  + Amazon Redshift
+ `ParallelApplyBufferSize` – Specifies the maximum number of records to store in each buffer queue for concurrent threads to push to an Amazon DocumentDB, Kinesis, Amazon MSK, OpenSearch, or Amazon Redshift target endpoint during a CDC load. The default value is 100. The maximum value is 1000. Use this option when `ParallelApplyThreads` specifies more than one thread. 
+ `ParallelApplyQueuesPerThread` – Specifies the number of queues that each thread accesses to take data records out of queues and generate a batch load for an Amazon DocumentDB, Kinesis, Amazon MSK, or OpenSearch endpoint during CDC. The default value is 1.
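Several of the constraints above, such as the LOB settings, can be checked before you submit a settings file. The following is a hypothetical helper, not part of AWS DMS, that applies the LOB rules described earlier in this list:

```python
def validate_lob_settings(target_metadata):
    """Return a list of problems with the TargetMetadata LOB settings."""
    problems = []
    if target_metadata.get("SupportLobs"):
        full = target_metadata.get("FullLobMode", False)
        limited = target_metadata.get("LimitedSizeLobMode", False)
        # SupportLobs=true requires one of the two LOB modes to be true.
        if not (full or limited):
            problems.append("SupportLobs=true requires FullLobMode or LimitedSizeLobMode")
        # FullLobMode requires a LobChunkSize value.
        if full and target_metadata.get("LobChunkSize", 0) <= 0:
            problems.append("FullLobMode=true requires a LobChunkSize value")
        # LimitedSizeLobMode requires LobMaxSize, capped at 102400 KB (100 MB).
        if limited and not 0 < target_metadata.get("LobMaxSize", 0) <= 102400:
            problems.append("LimitedSizeLobMode=true requires LobMaxSize between 1 and 102400 KB")
    return problems

# The values from the example settings file earlier pass the checks.
example = {"SupportLobs": True, "FullLobMode": False, "LobChunkSize": 64,
           "LimitedSizeLobMode": True, "LobMaxSize": 32}
print(validate_lob_settings(example))  # []
```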

# Full-load task settings

Full-load settings include the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ To indicate how to handle loading the target at full-load startup, specify one of the following values for the `TargetTablePrepMode` option: 
  +  `DO_NOTHING` – Data and metadata of the existing target table aren't affected. 
  +  `DROP_AND_CREATE` – The existing table is dropped and a new table is created in its place. 
  +  `TRUNCATE_BEFORE_LOAD` – Data is truncated without affecting the table metadata.
+ To delay primary key or unique index creation until after a full load completes, set the `CreatePkAfterFullLoad` option to `true`.
+ For full-load and CDC-enabled tasks, you can set the following options for `Stop task after full load completes`: 
  + `StopTaskCachedChangesApplied` – Set this option to `true` to stop a task after a full load completes and cached changes are applied. 
  + `StopTaskCachedChangesNotApplied` – Set this option to `true` to stop a task before cached changes are applied. 
+ To indicate the maximum number of tables to load in parallel, set the `MaxFullLoadSubTasks` option. The default is 8; the maximum value is 49.
+ Set the `ParallelLoadThreads` option to indicate how many concurrent threads AWS DMS uses during a full-load process to push data records to a target endpoint. The default value is zero (0).
**Important**  
`MaxFullLoadSubTasks` controls the number of tables or table segments to load in parallel. `ParallelLoadThreads` controls the number of threads that a migration task uses to run the loads in parallel. *These settings are multiplicative*. As such, the total number of threads used during a full-load task is approximately the value of `ParallelLoadThreads` multiplied by the value of `MaxFullLoadSubTasks` (`ParallelLoadThreads` \* `MaxFullLoadSubTasks`).  
If you create tasks with a high number of full-load subtasks and a high number of parallel load threads, your task can consume too much memory and fail.
+ You can set the number of seconds that AWS DMS waits for open transactions to close before beginning a full-load operation. To do so, set the `TransactionConsistencyTimeout` option. The default value is 600 (10 minutes). AWS DMS begins the full load after the timeout value is reached, even if there are open transactions. A full-load-only task doesn't wait 10 minutes; it starts immediately.
+ To indicate the maximum number of records that can be transferred together, set the `CommitRate` option. The default value is 10000, and the maximum value is 50000.
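The thread arithmetic in the note above can be sketched in a few lines. Treating `ParallelLoadThreads` of 0 as one thread per subtask is an assumption of this sketch:

```python
def approx_full_load_threads(parallel_load_threads, max_full_load_sub_tasks=8):
    """Approximate total threads used by a full-load task."""
    if not 1 <= max_full_load_sub_tasks <= 49:
        raise ValueError("MaxFullLoadSubTasks must be between 1 and 49")
    # Assumption for this sketch: ParallelLoadThreads of 0 (the default)
    # behaves like a single thread per subtask.
    return max(parallel_load_threads, 1) * max_full_load_sub_tasks

print(approx_full_load_threads(4, 8))  # 32
```

With 4 parallel load threads and the default 8 subtasks, the task uses roughly 32 threads, which is why high values for both settings can exhaust memory.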

# Time Travel task settings

To log and debug replication tasks, you can use AWS DMS Time Travel. With this approach, you use Amazon S3 to store logs and encrypt them using your encryption keys. Only users with access to your Time Travel S3 bucket can retrieve the logs; they can filter the logs using date-time filters, then view, download, and obfuscate them as needed. By doing this, you can securely "travel back in time" to investigate database activities. Time Travel works independently of CloudWatch logging. For more information on CloudWatch logging, see [Logging task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.md). 

You can use Time Travel in all AWS Regions with AWS DMS-supported Oracle, Microsoft SQL Server, and PostgreSQL source endpoints, and AWS DMS-supported PostgreSQL and MySQL target endpoints. You can turn on Time Travel only for full-load and change data capture (CDC) tasks and for CDC-only tasks. To turn on Time Travel or to modify any existing Time Travel settings, ensure that your replication task is stopped.

The Time Travel settings include the `TTSettings` properties following:
+ `EnableTT` – If this option is set to `true`, Time Travel logging is turned on for the task. The default value is `false`.

  Type: Boolean

  Required: No
+ `EncryptionMode` – The type of server-side encryption being used on your S3 bucket to store your data and logs. You can specify either `"SSE_S3"` (the default) or `"SSE_KMS"`.

  You can change `EncryptionMode` from `"SSE_KMS"` to `"SSE_S3"`, but not the reverse.

  Type: String

  Required: No
+ `ServerSideEncryptionKmsKeyId` – If you specify `"SSE_KMS"` for `EncryptionMode`, provide the ID for your custom managed AWS KMS key. Make sure that the key that you use has an attached policy that turns on AWS Identity and Access Management (IAM) user permissions and allows use of the key. 

  Only your own custom-managed symmetric KMS key is supported with the `"SSE_KMS"` option.

  Type: String

  Required: Only if you set `EncryptionMode` to `"SSE_KMS"`
+ `ServiceAccessRoleArn` – The Amazon Resource Name (ARN) used by the service to access the IAM role. Set the role name to `dms-tt-s3-access-role`. This is a required setting that allows AWS DMS to write and read objects from an S3 bucket.

  Type: String

  Required: If Time Travel is turned on

  Following is an example policy for this role.


  ```
  {
   "Version": "2012-10-17",
   "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "kms:GenerateDataKey",
                  "kms:Decrypt",
                  "s3:ListBucket",
                  "s3:DeleteObject"
              ],
              "Resource": [
                  "arn:aws:s3:::S3bucketName*",
                  "arn:aws:kms:us-east-1:112233445566:key/1234a1a1-1m2m-1z2z-d1d2-12dmstt1234"
              ]
          }
      ]
  }
  ```


  Following is an example trust policy for this role.


  ```
  {
   "Version": "2012-10-17",
   "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": [
                       "dms.amazonaws.com"
                   ]
               },
               "Action": "sts:AssumeRole"
          }
      ]
  }
  ```

+ `BucketName` – The name of the S3 bucket to store Time Travel logs. Make sure to create this S3 bucket before turning on Time Travel logs.

  Type: String

  Required: If Time Travel is turned on
+ `BucketFolder` – An optional parameter to set a folder name in the S3 bucket. If you specify this parameter, AWS DMS creates the Time Travel logs in the path `"/BucketName/BucketFolder/taskARN/YYYY/MM/DD/hh"`. If you don't specify this parameter, AWS DMS uses the default path `"/BucketName/dms-time-travel-logs/taskARN/YYYY/MM/DD/hh"`.

  Type: String

  Required: No
+ `EnableDeletingFromS3OnTaskDelete` – When this option is set to `true`, AWS DMS deletes the Time Travel logs from S3 if the task is deleted. The default value is `false`.

  Type: Boolean

  Required: No
+ `EnableRawData` – When this option is set to `true`, the data manipulation language (DML) raw data for Time Travel logs appears under the `raw_data` column of the Time Travel logs. For the details, see [Using the Time Travel logs](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.LogSchema.md). The default value is `false`. When this option is set to `false`, only the type of DML is captured.

  Type: Boolean

  Required: No
+ `RawDataFormat` – In AWS DMS versions 3.5.0 and higher, when `EnableRawData` is set to `true`, this property specifies the format for the raw data of the DML in a Time Travel log. It can be one of the following:
  + `"TEXT"` – Parsed, readable column names and values for DML events captured during CDC as `Raw` fields.
  + `"HEX"` – The original hexadecimal for column names and values captured for DML events during CDC.

  This property applies to Oracle and Microsoft SQL Server database sources.

  Type: String

  Required: No
+ `OperationsToLog` – Specifies the types of DML operations to log in Time Travel logs. You can specify one or more of the following, separated by commas:
  + `"INSERT"`
  + `"UPDATE"`
  + `"DELETE"`
  + `"COMMIT"`
  + `"ROLLBACK"`
  + `"ALL"`

  The default is `"ALL"`.

  Type: String

  Required: No
+ `MaxRecordSize` – Specifies the maximum size of Time Travel log records that are logged for each row. Use this property to control the growth of Time Travel logs for especially busy tables. The default is 64 KB.

  Type: Integer

  Required: No
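The `BucketFolder` path layout described above can be sketched as follows. This is illustrative only, and `TASK_ARN` is a placeholder for a real task ARN:

```python
from datetime import datetime, timezone

# Placeholder: the real path contains the actual replication task ARN.
TASK_ARN = "taskARN"

def time_travel_prefix(bucket_name, bucket_folder=None, when=None):
    """Build the S3 prefix where DMS writes Time Travel logs (illustrative only)."""
    # When BucketFolder isn't specified, DMS uses dms-time-travel-logs.
    folder = bucket_folder or "dms-time-travel-logs"
    when = when or datetime.now(timezone.utc)
    return "/{}/{}/{}/{}".format(bucket_name, folder, TASK_ARN,
                                 when.strftime("%Y/%m/%d/%H"))

when = datetime(2021, 9, 23, 1, tzinfo=timezone.utc)
print(time_travel_prefix("myttbucket", "myttfolder", when))
# /myttbucket/myttfolder/taskARN/2021/09/23/01
```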

For more information on turning on and using Time Travel logs, see the following topics.

**Topics**
+ [Turning on the Time Travel logs for a task](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.TaskEnabling.md)
+ [Using the Time Travel logs](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.LogSchema.md)
+ [How often AWS DMS uploads Time Travel logs to S3](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.UploadsToS3.md)

# Turning on the Time Travel logs for a task

You can turn on Time Travel for an AWS DMS task using the task settings described previously. Make sure that your replication task is stopped before you turn on Time Travel.

**To turn on Time Travel using the AWS CLI**

1. Create a DMS task configuration JSON file and add a `TTSettings` section such as the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

   ```
    .
    .
    .
       },
   "TTSettings" : {
     "EnableTT" : true,
     "TTS3Settings": {
         "EncryptionMode": "SSE_KMS",
         "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-west-2:112233445566:key/myKMSKey",
         "ServiceAccessRoleArn": "arn:aws:iam::112233445566:role/dms-tt-s3-access-role",
         "BucketName": "myttbucket",
         "BucketFolder": "myttfolder",
         "EnableDeletingFromS3OnTaskDelete": false
       },
     "TTRecordSettings": {
         "EnableRawData" : true,
         "OperationsToLog": "DELETE,UPDATE",
         "MaxRecordSize": 64
       },
    .
    .
    .
   ```

1. In an appropriate task action, specify this JSON file using the `--replication-task-settings` option. For example, the CLI code fragment following specifies this Time Travel settings file as part of `create-replication-task`.

   ```
   aws dms create-replication-task \
   --target-endpoint-arn arn:aws:dms:us-east-1:112233445566:endpoint:ELS5O7YTYV452CAZR2EYBNQGILFHQIFVPWFRQAY \
   --source-endpoint-arn arn:aws:dms:us-east-1:112233445566:endpoint:HNX2BWIIN5ZYFF7F6UFFZVWTDFFSMTNOV2FTXZA \
   --replication-instance-arn arn:aws:dms:us-east-1:112233445566:rep:ERLHG2UA52EEJJKFYNYWRPCG6T7EPUAB5AWBUJQ \
   --migration-type full-load-and-cdc --table-mappings 'file:///FilePath/mappings.json' \
   --replication-task-settings 'file:///FilePath/task-settings-tt-enabled.json' \
   --replication-task-identifier test-task
                               .
                               .
                               .
   ```

   Here, the name of this Time Travel settings file is `task-settings-tt-enabled.json`.

Similarly, you can specify this file as part of the `modify-replication-task` action.

Note the special handling of Time Travel logs for the task actions following:
+ `start-replication-task` – When you run a replication task, if an S3 bucket used for Time Travel isn't accessible, the task is marked as `FAILED`.
+ `stop-replication-task` – When the task stops, AWS DMS immediately pushes all Time Travel logs that are currently available for the replication instance to the S3 bucket used for Time Travel.

While a replication task runs, you can change the `EncryptionMode` value from `"SSE_KMS"` to `"SSE_S3"` but not the reverse.

If the size of Time Travel logs for an ongoing task exceeds 1 GB, DMS pushes the logs to S3 within five minutes of reaching that size. After a task is running, if the S3 bucket or KMS key becomes inaccessible, DMS stops pushing logs to this bucket. If you find your logs aren't being pushed to your S3 bucket, check your S3 and AWS KMS permissions. For more details on how often DMS pushes these logs to S3, see [How often AWS DMS uploads Time Travel logs to S3](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.UploadsToS3.md).

To turn on Time Travel for an existing task from the console, use the JSON editor option under **Task Settings** to add a `TTSettings` section.
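A minimal sketch of step 1: merging a `TTSettings` section into an existing (abridged) task-settings document and writing it out for use with `--replication-task-settings`:

```python
import json

# Existing task settings, abridged to a single section for this sketch.
settings = {"Logging": {"EnableLogging": False}}

# Add the Time Travel section described in the steps above.
settings["TTSettings"] = {
    "EnableTT": True,
    "TTS3Settings": {
        "EncryptionMode": "SSE_KMS",
        "BucketName": "myttbucket",
        "BucketFolder": "myttfolder",
    },
    "TTRecordSettings": {"OperationsToLog": "DELETE,UPDATE", "MaxRecordSize": 64},
}

# Write the file that the CLI references as file://task-settings-tt-enabled.json.
with open("task-settings-tt-enabled.json", "w") as f:
    json.dump(settings, f, indent=2)
```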

# Using the Time Travel logs

*Time Travel log files* are comma-separated value (CSV) files with the fields following.

```
log_timestamp 
component 
dms_source_code_location 
transaction_id 
event_id 
event_timestamp 
lsn/scn 
primary_key
record_type 
event_type 
schema_name 
table_name 
statement 
action 
result 
raw_data
```

After your Time Travel logs are available in S3, you can directly access and query them with tools such as Amazon Athena. Or you can download the logs as you can any file from S3.

The example following shows a Time Travel log where transactions for a table called `mytable` are logged. The line endings for the following log are added for readability.

```
"log_timestamp ","tt_record_type","dms_source_code_location ","transaction_id",
"event_id","event_timestamp","scn_lsn","primary_key","record_type","event_type",
"schema_name","table_name","statement","action","result","raw_data"
"2021-09-23T01:03:00:778230","SOURCE_CAPTURE","postgres_endpoint_wal_engine.c:00819",
"609284109","565612992","2021-09-23 01:03:00.765321+00","00000E9C/D53AB518","","DML",
"UPDATE (3)","dmstest","mytable","","Migrate","","table dmstest.mytable:
UPDATE: id[bigint]:2244937 phone_number[character varying]:'phone-number-482'
age[integer]:82 gender[character]:'f' isactive[character]:'true ' 
date_of_travel[timestamp without time zone]:'2021-09-23 01:03:00.76593' 
description[text]:'TEST DATA TEST DATA TEST DATA TEST DATA'"
```
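
After logs like the preceding are in S3, you can query them in place. The following Athena statements are a sketch only: the table name, database, and S3 location are placeholders, and the column list mirrors the field list shown earlier in this section.

```
CREATE EXTERNAL TABLE dms_tt_logs (
    log_timestamp string,
    component string,
    dms_source_code_location string,
    transaction_id string,
    event_id string,
    event_timestamp string,
    lsn_scn string,
    primary_key string,
    record_type string,
    event_type string,
    schema_name string,
    table_name string,
    statement string,
    action string,
    result string,
    raw_data string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://amzn-s3-demo-bucket/tt-logs/';

SELECT event_timestamp, event_type, raw_data
FROM dms_tt_logs
WHERE table_name = 'mytable';
```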

# How often AWS DMS uploads Time Travel logs to S3


To minimize the storage usage of your replication instance, AWS DMS offloads Time Travel logs from it periodically. 

Time Travel logs are pushed to your Amazon S3 bucket in the cases following:
+ If the current size of logs exceeds 1 GB, AWS DMS uploads the logs to S3 within five minutes. Thus, AWS DMS can make up to 12 calls an hour to S3 and AWS KMS for each running task.
+ AWS DMS uploads the logs to S3 every hour, regardless of the size of the logs.
+ When a task is stopped, AWS DMS immediately uploads the Time Travel logs to S3.

# Logging task settings

Logging uses Amazon CloudWatch to log information during the migration process. Using logging task settings, you can specify which component activities are logged and what amount of information is written to the log. Logging task settings are written to a JSON file. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

You can turn on CloudWatch logging in several ways. You can select the `EnableLogging` option on the AWS Management Console when you create a migration task. Or, you can set the `EnableLogging` option to `true` when creating a task using the AWS DMS API. You can also specify `"EnableLogging": true` in the JSON of the logging section of task settings.

When you set `EnableLogging` to `true`, AWS DMS assigns the CloudWatch group name and stream name as follows. You can't set these values directly.
+ **CloudWatchLogGroup**: `dms-tasks-<REPLICATION_INSTANCE_IDENTIFIER>`
+ **CloudWatchLogStream**: `dms-task-<REPLICATION_TASK_EXTERNAL_RESOURCE_ID>`

`<REPLICATION_INSTANCE_IDENTIFIER>` is the identifier of the replication instance. `<REPLICATION_TASK_EXTERNAL_RESOURCE_ID>` is the value of the `<resourcename>` section of the Task ARN. For information about how AWS DMS generates resource ARNs, see [Constructing an Amazon Resource Name (ARN) for AWS DMS](CHAP_Introduction.AWS.ARN.md).
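
For example, for a replication instance with the identifier `my-repl-instance` and a task whose external resource ID is `ABCDEFGH12345678` (both hypothetical values), the generated names would be the following.

```
dms-tasks-my-repl-instance
dms-task-ABCDEFGH12345678
```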

CloudWatch integrates with AWS Identity and Access Management (IAM), and you can specify which CloudWatch actions a user in your AWS account can perform. For more information about working with IAM in CloudWatch, see [Identity and access management for Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/auth-and-access-control-cw.html) and [Logging Amazon CloudWatch API calls](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/logging_cw_api_calls.html) in the *Amazon CloudWatch User Guide*.

To delete the task logs, you can set `DeleteTaskLogs` to `true` in the JSON of the logging section of the task settings.
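
For example, the logging section of a task settings file that requests log deletion might include the following fragment. This is a sketch of where the setting goes, not a complete logging configuration.

```
"Logging": {
    "EnableLogging": true,
    "DeleteTaskLogs": true
}
```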

You can specify logging for the following types of events:
+ `FILE_FACTORY` – The file factory manages files used for batch apply and batch load, and manages Amazon S3 endpoints.
+ `METADATA_MANAGER` – The metadata manager manages source and target metadata, partitioning, and table state during replication.
+ `SORTER` – The `SORTER` receives incoming events from the `SOURCE_CAPTURE` process. The events are batched in transactions, and passed to the `TARGET_APPLY` service component. If the `SOURCE_CAPTURE` process produces events faster than the `TARGET_APPLY` component can consume them, the `SORTER` component caches the backlogged events to disk or to a swap file. Cached events are a common cause for running out of storage in replication instances.

  The `SORTER` service component manages cached events, gathers CDC statistics, and reports task latency.
+ `SOURCE_CAPTURE` – Ongoing replication (CDC) data is captured from the source database or service, and passed to the SORTER service component.
+ `SOURCE_UNLOAD` – Data is unloaded from the source database or service during Full Load.
+ `TABLES_MANAGER` – The table manager tracks captured tables, manages the order of table migration, and collects table statistics.
+ `TARGET_APPLY` – Data and data definition language (DDL) statements are applied to the target database.
+ `TARGET_LOAD` – Data is loaded into the target database.
+ `TASK_MANAGER` – The task manager manages running tasks, and breaks tasks down into sub-tasks for parallel data processing.
+ `TRANSFORMATION` – Table-mapping transformation events. For more information, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).
+ `VALIDATOR`/`VALIDATOR_EXT` – The `VALIDATOR` service component verifies that data was migrated accurately from the source to the target. For more information, see [Data validation](CHAP_Validating.md). 
+ `DATA_RESYNC` – Common component of the Data resync feature that manages the data resync flow. For more information, see [AWS DMS data resync](CHAP_Validating.DataResync.md).
+ `RESYNC_UNLOAD` – Data is unloaded from the source database or service during the resync process.
+ `RESYNC_APPLY` – Data manipulation language (DML) statements are applied to the target database during resync.

The following logging components generate a large amount of logs when using the `LOGGER_SEVERITY_DETAILED_DEBUG` log severity level:
+ `COMMON`
+ `ADDONS`
+ `DATA_STRUCTURE`
+ `COMMUNICATION`
+ `FILE_TRANSFER`
+ `FILE_FACTORY`

Logging levels other than `DEFAULT` are rarely needed for these components during troubleshooting. We do not recommend changing the logging level from `DEFAULT` for these components unless specifically requested by AWS Support.

After you specify one of the preceding components, you can then specify the amount of information that is logged, as shown in the following list. 

The levels of severity are in order from lowest to highest level of information. The higher levels always include information from the lower levels. 
+ `LOGGER_SEVERITY_ERROR` – Error messages are written to the log.
+ `LOGGER_SEVERITY_WARNING` – Warnings and error messages are written to the log.
+ `LOGGER_SEVERITY_INFO` – Informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DEFAULT` – Informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DEBUG` – Debug messages, informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DETAILED_DEBUG` – All information is written to the log.

The following JSON example shows task settings for logging all actions and levels of severity.

```
…
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      {
        "Id": "FILE_FACTORY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "METADATA_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SORTER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TABLES_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_INFO"
      },{
        "Id": "TASK_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEBUG"
      },{
        "Id": "TRANSFORMATION",
        "Severity": "LOGGER_SEVERITY_DEBUG"
      },{
        "Id": "VALIDATOR",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      }
    ],
    "CloudWatchLogGroup": null,
    "CloudWatchLogStream": null
  }, 
…
```

# Control table task settings

Control tables provide information about an AWS DMS task. They also provide useful statistics that you can use to plan and manage both the current migration task and future tasks. You can apply these task settings in a JSON file or by choosing **Advanced Settings** on the **Create task** page in the AWS DMS console. The Apply Exceptions table (`dmslogs.awsdms_apply_exceptions`) is always created on database targets. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

AWS DMS creates control tables only during Full Load + CDC or CDC-only tasks, and not during Full Load only tasks. 

For full load and CDC (Migrate existing data and replicate ongoing changes) and CDC only (Replicate data changes only) tasks, you can also create additional tables, including the following:
+ **Replication Status (dmslogs.awsdms_status)** – This table provides details about the current task. These include task status, amount of memory consumed by the task, and the number of changes not yet applied to the target. This table also gives the position in the source database where AWS DMS is currently reading. Also, it indicates if the task is in the full load phase or change data capture (CDC).
+ **Suspended Tables (dmslogs.awsdms_suspended_tables)** – This table provides a list of suspended tables as well as the reason they were suspended.
+ **Replication History (dmslogs.awsdms_history)** – This table provides information about replication history. This information includes the number and volume of records processed during the task, latency at the end of a CDC task, and other statistics.

The Apply Exceptions table (`dmslogs.awsdms_apply_exceptions`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  TASK_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in task ARN.  | 
|  TABLE_OWNER  |  nvchar  |  The table owner.  | 
|  TABLE_NAME  |  nvchar  |  The table name.  | 
|  ERROR_TIME  |  timestamp  |  The time the exception (error) occurred.  | 
|  STATEMENT  |  nvchar  |  The statement that was being run when the error occurred.  | 
|  ERROR  |  nvchar  |  The error name and description.  | 

The Replication Status table (`dmslogs.awsdms_status`) contains the current status of the task and the target database. It has the following settings.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in task ARN.  | 
|  TASK_STATUS  |  varchar  |  One of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.html) Task status is set to FULL LOAD as long as there is at least one table in full load. After all tables have been loaded, the task status changes to CHANGE PROCESSING if CDC is enabled. The task is set to NOT RUNNING before you start the task, or after the task completes.  | 
| STATUS_TIME |  timestamp  |  The timestamp of the task status.  | 
|  PENDING_CHANGES  |  int  |  The number of change records that were committed in the source database and cached in the memory and disk of your replication instance.  | 
|  DISK_SWAP_SIZE  |  int  |  The amount of disk space used by old or offloaded transactions.  | 
| TASK_MEMORY |  int  |  Current memory used, in MB.  | 
|  SOURCE_CURRENT_POSITION  |  varchar  |  The position in the source database that AWS DMS is currently reading from.  | 
|  SOURCE_CURRENT_TIMESTAMP  |  timestamp  |  The timestamp in the source database that AWS DMS is currently reading from.  | 
|  SOURCE_TAIL_POSITION  |  varchar  |  The position of the oldest start transaction that isn't committed. This value is the newest position that you can revert to without losing any changes.  | 
|  SOURCE_TAIL_TIMESTAMP  |  timestamp  |  The timestamp of the oldest start transaction that isn't committed. This value is the newest timestamp that you can revert to without losing any changes.  | 
|  SOURCE_TIMESTAMP_APPLIED  |  timestamp  |  The timestamp of the last transaction commit. In a bulk apply process, this value is the timestamp for the commit of the last transaction in the batch.  | 

The Suspended table (`dmslogs.awsdms_suspended_tables`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK_NAME  |  nvchar  |  The name of the AWS DMS task.  | 
|  TABLE_OWNER  |  nvchar  |  The table owner.  | 
|  TABLE_NAME  |  nvchar  |  The table name.  | 
|  SUSPEND_REASON  |  nvchar  |  Reason for suspension.  | 
|  SUSPEND_TIMESTAMP  |  timestamp  |  The time the suspension occurred.  | 

The Replication History table (`dmslogs.awsdms_history`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in task ARN.  | 
|  TIMESLOT_TYPE  |  varchar  |  One of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.html) If the task is running both full load and CDC, two history records are written to the time slot.  | 
| TIMESLOT |  timestamp  |  The ending timestamp of the time slot.  | 
|  TIMESLOT_DURATION  |  int  |  The duration of the time slot, in minutes.  | 
|  TIMESLOT_LATENCY  |  int  |  The target latency at the end of the time slot, in seconds. This value only applies to CDC time slots.  | 
| RECORDS |  int  |  The number of records processed during the time slot.  | 
|  TIMESLOT_VOLUME  |  int  |  The volume of data processed in MB.  | 

The Validation Failure table (`awsdms_validation_failures_v1`) contains all the data validation failures for a task. For more information, see [Data Validation Troubleshooting](CHAP_Validating.md#CHAP_Validating.Troubleshooting).

Additional control table settings include the following:
+ `HistoryTimeslotInMinutes` – Use this option to indicate the length of each time slot in the Replication History table. The default is 5 minutes.
+ `ControlSchema` – Use this option to indicate the database schema name for the control tables for the AWS DMS target. If you don't enter any information for this option, then the tables are copied to the default location in the database as listed following: 
  + PostgreSQL, Public
  + Oracle, the target schema
  + Microsoft SQL Server, dbo in the target database
  + MySQL, awsdms_control
  + MariaDB, awsdms_control
  + Amazon Redshift, Public
  + DynamoDB, created as individual tables in the database
  + IBM Db2 LUW, awsdms_control
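
Taken together, a control table section of a task settings file might look like the following sketch. The `...TableEnabled` flags control creation of the optional tables described earlier; confirm the exact field names for your replication engine version.

```
"ControlTablesSettings": {
    "ControlSchema": "dmslogs",
    "HistoryTimeslotInMinutes": 5,
    "HistoryTableEnabled": true,
    "SuspendedTablesTableEnabled": true,
    "StatusTableEnabled": true
}
```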

# Stream buffer task settings

You can set stream buffer settings using the AWS CLI, including the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 
+ `StreamBufferCount` – Use this option to specify the number of data stream buffers for the migration task. The default stream buffer number is 3. Increasing the value of this setting might increase the speed of data extraction. However, this performance increase is highly dependent on the migration environment, including the source system and instance class of the replication server. The default is sufficient for most situations.
+ `StreamBufferSizeInMB` – Use this option to indicate the maximum size of each data stream buffer. The default size is 8 MB. You might need to increase the value for this option when you work with very large LOBs. You also might need to increase the value if you receive a message in the log files that the stream buffer size is insufficient. When calculating the size of this option, you can use the following equation: `[Max LOB size (or LOB chunk size)] * [number of LOB columns] * [number of stream buffers] * [number of tables loading in parallel per task (MaxFullLoadSubTasks)] * 3`. For example, with a 2 MB LOB chunk size, 3 LOB columns, 3 stream buffers, and 8 tables loading in parallel, the result is 2 × 3 × 3 × 8 × 3 = 432 MB.
+ `CtrlStreamBufferSizeInMB` – Use this option to set the size of the control stream buffer. The value is in megabytes, and can be 1–8. The default value is 5. You might need to increase this when working with a very large number of tables, such as tens of thousands of tables.
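
In a task settings file, these options appear together in a `StreamBufferSettings` section. The following example shows the default values described above.

```
"StreamBufferSettings": {
    "StreamBufferCount": 3,
    "StreamBufferSizeInMB": 8,
    "CtrlStreamBufferSizeInMB": 5
}
```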

# Change processing tuning settings

The following settings determine how AWS DMS handles changes for target tables during change data capture (CDC). Several of these settings depend on the value of the target metadata parameter `BatchApplyEnabled`. For more information on the `BatchApplyEnabled` parameter, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

Change processing tuning settings include the following:

The following settings apply only when the target metadata parameter `BatchApplyEnabled` is set to `true`.
+ `BatchApplyPreserveTransaction` – If set to `true`, transactional integrity is preserved and a batch is guaranteed to contain all the changes within a transaction from the source. The default value is `true`. This setting applies only to Oracle target endpoints.

  If set to `false`, there can be temporary lapses in transactional integrity to improve performance. There is no guarantee that all the changes within a transaction from the source are applied to the target in a single batch. 

  By default, AWS DMS processes changes in a transactional mode, which preserves transactional integrity. If you can afford temporary lapses in transactional integrity, you can use the batch optimized apply option instead. This option efficiently groups transactions and applies them in batches for efficiency purposes. Using the batch optimized apply option almost always violates referential integrity constraints. So we recommend that you turn these constraints off during the migration process and turn them on again as part of the cutover process. 
+ `BatchApplyTimeoutMin` – Sets the minimum amount of time in seconds that AWS DMS waits between each application of batch changes. The default value is 1.
+ `BatchApplyTimeoutMax` – Sets the maximum amount of time in seconds that AWS DMS waits between each application of batch changes before timing out. The default value is 30.
+ `BatchApplyMemoryLimit` – Sets the maximum amount of memory (in MB) to use for pre-processing in **Batch optimized apply mode**. The default value is 500.
+ `BatchSplitSize` – Sets the maximum number of changes applied in a single batch. The default value is 0, which means that no limit is applied.

The following settings apply only when the target metadata parameter `BatchApplyEnabled` is set to `false`.
+ `MinTransactionSize` – Sets the minimum number of changes to include in each transaction. The default value is 1000.
+ `CommitTimeout` – Sets the maximum time in seconds for AWS DMS to collect transactions in batches before declaring a timeout. The default value is 1.

For bidirectional replication, the following setting applies only when the target metadata parameter `BatchApplyEnabled` is set to `false`.
+ `LoopbackPreventionSettings` – These settings provide loopback prevention for each ongoing replication task in any pair of tasks involved in bidirectional replication. *Loopback prevention* prevents identical changes from being applied in both directions of the bidirectional replication, which can corrupt data. For more information about bidirectional replication, see [Performing bidirectional replication](CHAP_Task.CDC.md#CHAP_Task.CDC.Bidirectional).

AWS DMS attempts to keep transaction data in memory until the transaction is fully committed to the source, the target, or both. However, transactions that are larger than the allocated memory or that aren't committed within the specified time limit are written to disk.

The following settings apply to change processing tuning regardless of the change processing mode.
+ `MemoryLimitTotal` – Sets the maximum size (in MB) that all transactions can occupy in memory before being written to disk. The default value is 1024.
+ `MemoryKeepTime` – Sets the maximum time in seconds that each transaction can stay in memory before being written to disk. The duration is calculated from the time that AWS DMS started capturing the transaction. The default value is 60. 
+ `StatementCacheSize` – Sets the maximum number of prepared statements to store on the server for later execution when applying changes to the target. The default value is 50, and the maximum value is 200.
+ `RecoveryTimeout` – When resuming a task in CDC mode, `RecoveryTimeout` controls how long (in minutes) the task waits to encounter the resume checkpoint. If the checkpoint isn't encountered within the configured time frame, the task fails. The default behavior is to wait indefinitely for the checkpoint event.

Example of how task settings that handle Change Processing Tuning appear in a task setting JSON file:

```
"ChangeProcessingTuning": {
        "BatchApplyPreserveTransaction": true,
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
        "BatchSplitSize": 0,
        "MinTransactionSize": 1000,
        "CommitTimeout": 1,
        "MemoryLimitTotal": 1024,
        "MemoryKeepTime": 60,
        "StatementCacheSize": 50,
        "RecoveryTimeout": -1
}
```

To control the frequency of writes to an Amazon S3 target during a data replication task, you can configure the `cdcMaxBatchInterval` and `cdcMinFileSize` extra connection attributes. This can result in better performance when analyzing the data without any additional overhead operations. For more information, see [Endpoint settings when using Amazon S3 as a target for AWS DMS](CHAP_Target.S3.md#CHAP_Target.S3.Configuring).
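
For example, you might set these attributes in the extra connection attributes of the S3 target endpoint (not in the task settings). The values following are illustrative only; confirm the attribute units and defaults in the S3 target documentation.

```
cdcMaxBatchInterval=60;cdcMinFileSize=32000;
```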

# Data validation task settings

You can ensure that your data was migrated accurately from the source to the target. If you enable validation for a task, AWS DMS begins comparing the source and target data immediately after a full load is performed for a table. For more information about task data validation, its requirements, the scope of its database support, and the metrics it reports, see [AWS DMS data validation](CHAP_Validating.md). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

 The data validation settings and their values include the following:
+ `EnableValidation` – Enables data validation when set to `true`. Otherwise, validation is disabled for the task. The default value is `false`.
+ `ValidationMode` – Controls how AWS DMS validates data in the target table against the source table. Starting with replication engine version 3.5.4, DMS automatically sets this to `GROUP_LEVEL` for supported migration paths, delivering enhanced validation performance and significantly faster processing for large datasets. This enhancement applies to the migration paths listed in [AWS DMS data resync](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.DataResync.html#CHAP_DataResync.limitations). For all other migration paths, the validation mode defaults to `ROW_LEVEL`. 
**Note**  
Irrespective of the setting, AWS DMS validates all rows configured via table validation.
+ `FailureMaxCount` – Specifies the maximum number of records that can fail validation before validation is suspended for the task. The default value is 10,000. If you want the validation to continue regardless of the number of records that fail validation, set this value higher than the number of records in the source.
+ `HandleCollationDiff` – When this option is set to `true`, the validation accounts for column collation differences between source and target databases. Otherwise, any such differences in column collation are ignored for validation. Column collations can dictate the order of rows, which is important for data validation. Setting `HandleCollationDiff` to true resolves those collation differences automatically and prevents false positives in data validation. The default value is `false`.
+ `RecordFailureDelayInMinutes` – Specifies the delay time in minutes before reporting any validation failure details.

  If the overall DMS task CDC latency is greater than the value of `RecordFailureDelayInMinutes`, the latency takes precedence. For example, if `RecordFailureDelayInMinutes` is 5 and the CDC latency is 7 minutes, then DMS waits 7 minutes to report the validation failure details.
+ `RecordFailureDelayLimitInMinutes` – Specifies the delay before reporting any validation failure details. AWS DMS uses the task latency to recognize actual delay for changes to make it to the target in order to prevent false positives. This setting overrides the actual delay and DMS task CDC Latency value and enables you to set a greater delay before reporting any validation metrics. The default value is 0.
+ `RecordSuspendDelayInMinutes` – Specifies the delay time in minutes before tables are suspended from validation due to the error threshold set in `FailureMaxCount`.
+ `SkipLobColumns` – When this option is set to `true`, AWS DMS skips data validation for all the LOB columns in the tables that are part of the task validation. The default value is `false`.
+ `TableFailureMaxCount` – Specifies the maximum number of rows in one table that can fail validation before validation is suspended for the table. The default value is 1,000. 
+ `ThreadCount` – Specifies the number of execution threads that AWS DMS uses during validation. Each thread selects not-yet-validated data from the source and target to compare and validate. The default value is 5. If you set `ThreadCount` to a higher number, AWS DMS can complete the validation faster. However, AWS DMS then runs more simultaneous queries, consuming more resources on the source and the target.
+ `ValidationOnly` – When this option is set to `true`, the task performs data validation without performing any migration or replication of data. The default value is `false`. You can't modify the `ValidationOnly` setting after the task is created.

  You must set **TargetTablePrepMode** to `DO_NOTHING` (the default for a validation only task) and set **Migration Type** to one of the following:
  + Full Load — Set the task **Migration type** to **Migrate existing data** in the AWS DMS console. Or, in the AWS DMS API, set the migration type to `full-load`.
  + CDC — Set the task **Migration type** to **Replicate data changes only** in the AWS DMS console. Or, in the AWS DMS API, set the migration type to `cdc`.

  Regardless of the migration type chosen, data isn't actually migrated or replicated during a validation only task.

  For more information, see [Validation only tasks](CHAP_Validating.md#CHAP_Validating.ValidationOnly).
**Important**  
The `ValidationOnly` setting is immutable. It can't be modified for a task after that task is created.
+ `ValidationPartialLobSize` – Specifies whether to do partial validation for LOB columns instead of validating all of the data stored in the column. You might find this useful when you're migrating just part of the LOB data and not the whole LOB data set. The value is in KB units. The default value is 0, which means AWS DMS validates all the LOB column data. For example, `"ValidationPartialLobSize": 32` means that AWS DMS only validates the first 32 KB of the column data in both the source and target.
+ `PartitionSize` – Specifies the batch size of records to read for comparison from both source and target. The default is 10,000.
+ `ValidationQueryCdcDelaySeconds` – The amount of time the first validation query is delayed on both source and target for each CDC update. This might help reduce resource contention when migration latency is high. A validation only task automatically sets this option to 180 seconds. The default is 0.
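
For a validation only task, the relevant parts of the task settings might look like the following sketch, with `TargetTablePrepMode` left at its `DO_NOTHING` default in the full load settings.

```
"FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING"
},
"ValidationSettings": {
    "EnableValidation": true,
    "ValidationOnly": true
}
```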

For example, the following JSON enables data validation with twice the default number of threads. It also accounts for differences in record order caused by column collation differences in PostgreSQL endpoints. Also, it provides a validation reporting delay to account for additional time to process any validation failures.

```
"ValidationSettings": {
     "EnableValidation": true,
     "ThreadCount": 10,
     "HandleCollationDiff": true,
     "RecordFailureDelayLimitInMinutes": 30
  }
```

**Note**  
For an Oracle endpoint, AWS DMS uses `DBMS_CRYPTO` to validate BLOBs. If your Oracle endpoint uses BLOBs, grant the `execute` permission for `DBMS_CRYPTO` to the user account that accesses the Oracle endpoint. To do this, run the following statement.  

```
grant execute on sys.dbms_crypto to dms_endpoint_user;
```

# Data resync settings


The Data resync feature allows you to resync the target database with your source database based on the data validation report. For more information, see [AWS DMS data validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html).

You can configure the resync process by adding `ResyncSettings` parameters to the `ReplicationTaskSettings`. For more information, see [Task settings example](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html#CHAP_Tasks.CustomizingTasks.TaskSettings.Example) in [Specifying task settings for AWS Database Migration Service tasks](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html).

**Note**  
The `ResyncSchedule` and `MaxResyncTime` parameters are required if the resync process is enabled and the task has a CDC component. They aren't valid for full-load-only tasks.

The Data resync parameter settings and values are as follows:

`EnableResync`  
Enables the Data resync feature when set to `true`. By default, Data resync is disabled.  
**Datatype**: Boolean  
**Required**: No  
**Default**: `false`  
**Validation**: Should not be null if `ResyncSettings` parameter is present in `TaskSettings`.

`ResyncSchedule`  
Time window for the Data resync feature to be in effect. Must be present in Cron format. For more information, see [Cron expression rules](CHAP_Validating.DataResync.md#CHAP_DataResync.cron).  
**Datatype**: String  
**Required**: No  
**Validation**:   
+ Must be present in Cron expression format.
+ Should not be null for tasks with CDC that have `EnableResync` set to `true`.
+ Cannot be set for tasks without a CDC component.

`MaxResyncTime`  
Maximum time limit in minutes for the Data resync feature to be in effect.  
**Datatype**: Integer  
**Required**: No  
**Validation**:   
+ Should not be null for tasks with CDC.
+ Not required for tasks without CDC.
+ Minimum value: `5 minutes`, Maximum value: `14400 minutes` (10 days).

`ValidationOnlyTaskID`  
Unique ID of the validation task. The validation only task ID is appended at the end of an ARN. For example:  
+ Validation only task ARN: `arn:aws:dms:us-west-2:123456789012:task:6DG4CLGJ5JSJR67CFD7UDXFY7KV6CYGRICL6KWI`
+ Validation only task ID: `6DG4CLGJ5JSJR67CFD7UDXFY7KV6CYGRICL6KWI`
**Datatype**: String  
**Required**: No  
**Validation**: Should not be null for tasks with Data resync feature enabled and validation disabled.  
Example:  

```
"ResyncSettings": {
    "EnableResync": true,
    "ResyncSchedule": "30 9 ? * MON-FRI", 
    "MaxResyncTime": 400,  
    "ValidationTaskId": "JXPP94804DJOEWIJD9348R3049"
},
```
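The `ResyncSettings` block above can be merged into a task's existing settings before applying them. The following sketch shows one way to do this; the boto3 call is shown only in comments, and the ARN there is a placeholder, not a real resource:

```python
import json

# Merge a ResyncSettings block into existing task settings.
# The existing settings shown here are a minimal stand-in.
task_settings = {"Logging": {"EnableLogging": True}}

task_settings["ResyncSettings"] = {
    "EnableResync": True,
    "ResyncSchedule": "30 9 ? * MON-FRI",
    "MaxResyncTime": 400,
    "ValidationTaskId": "JXPP94804DJOEWIJD9348R3049",
}

settings_json = json.dumps(task_settings)
print(settings_json)

# With boto3 available, the merged settings could then be applied
# (ModifyReplicationTask accepts the settings as a JSON string):
# import boto3
# dms = boto3.client("dms")
# dms.modify_replication_task(
#     ReplicationTaskArn="arn:aws:dms:us-west-2:123456789012:task:EXAMPLE",
#     ReplicationTaskSettings=settings_json,
# )
```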

# Task settings for change processing DDL handling



The following settings determine how AWS DMS handles data definition language (DDL) changes for target tables during change data capture (CDC). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

Task settings to handle change processing DDL include the following:
+ `HandleSourceTableDropped` – Set this option to `true` to drop the target table when the source table is dropped.
+ `HandleSourceTableTruncated` – Set this option to `true` to truncate the target table when the source table is truncated.
+ `HandleSourceTableAltered` – Set this option to `true` to alter the target table when the source table is altered.

Following is an example of how task settings that handle change processing DDL appear in a task setting JSON file:

```
"ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
},
```

**Note**  
For information about which DDL statements are supported for a specific endpoint, see the topic describing that endpoint.
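As an illustration of how the three flags interact, the following sketch (not AWS DMS internal code) maps a captured source DDL event to the action taken on the target table:

```python
# Illustrative sketch only: models how the three policy flags decide
# what happens on the target table for a captured source DDL event.
policy = {
    "HandleSourceTableDropped": True,
    "HandleSourceTableTruncated": True,
    "HandleSourceTableAltered": False,
}

def target_action(source_ddl: str) -> str:
    """Return the action applied to the target table for a source DDL event."""
    mapping = {
        "DROP": "HandleSourceTableDropped",
        "TRUNCATE": "HandleSourceTableTruncated",
        "ALTER": "HandleSourceTableAltered",
    }
    flag = mapping[source_ddl]
    return f"{source_ddl} target table" if policy[flag] else "ignore"

print(target_action("DROP"))   # DROP target table
print(target_action("ALTER"))  # ignore (flag is false in this policy)
```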

# Character substitution task settings


You can specify that your replication task perform character substitutions on the target database for all source database columns with the AWS DMS `STRING` or `WSTRING` data type. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

You can configure character substitution for any task with endpoints from the following source and target databases:
+ Source databases:
  + Oracle
  + Microsoft SQL Server
  + MySQL
  + MariaDB
  + PostgreSQL
  + SAP Adaptive Server Enterprise (ASE)
  + IBM Db2 LUW
+ Target databases:
  + Oracle
  + Microsoft SQL Server
  + MySQL
  + MariaDB
  + PostgreSQL
  + SAP Adaptive Server Enterprise (ASE)
  + Amazon Redshift

You can specify character substitutions using the `CharacterSetSettings` parameter in your task settings. These character substitutions occur for characters specified using the Unicode code point value in hexadecimal notation. You can implement the substitutions in two phases, in the following order if both are specified:

1. **Individual character replacement** – AWS DMS can replace the values of selected characters on the source with specified replacement values of corresponding characters on the target. Use the `CharacterReplacements` array in `CharacterSetSettings` to select all source characters having the Unicode code points you specify. Use this array also to specify the replacement code points for the corresponding characters on the target. 

   To select all characters on the source that have a given code point, set an instance of `SourceCharacterCodePoint` in the `CharacterReplacements` array to that code point. Then specify the replacement code point for all equivalent target characters by setting the corresponding instance of `TargetCharacterCodePoint` in this array. To delete target characters instead of replacing them, set the appropriate instances of `TargetCharacterCodePoint` to zero (0). You can replace or delete as many different values of target characters as you want by specifying additional pairs of `SourceCharacterCodePoint` and `TargetCharacterCodePoint` settings in the `CharacterReplacements` array. If you specify the same value for multiple instances of `SourceCharacterCodePoint`, the value of the last corresponding setting of `TargetCharacterCodePoint` applies on the target.

   For example, suppose that you specify the following values for `CharacterReplacements`.

   ```
   "CharacterSetSettings": {
       "CharacterReplacements": [ {
           "SourceCharacterCodePoint": 62,
           "TargetCharacterCodePoint": 61
           }, {
           "SourceCharacterCodePoint": 42,
           "TargetCharacterCodePoint": 41
           }
       ]
   }
   ```

   In this example, AWS DMS replaces all characters that have the source code point hex value 62 with characters that have the code point value 61 on the target. Also, AWS DMS replaces all characters with the source code point 42 with characters that have the code point value 41. In other words, AWS DMS replaces all instances of the letter `'b'` on the target with the letter `'a'`. Similarly, AWS DMS replaces all instances of the letter `'B'` on the target with the letter `'A'`.

1. **Character set validation and replacement** – After any individual character replacements complete, AWS DMS can make sure that all target characters have valid Unicode code points in the single character set that you specify. You use `CharacterSetSupport` in `CharacterSetSettings` to configure this target character verification and modification. To specify the verification character set, set `CharacterSet` in `CharacterSetSupport` to the character set's string value. (The possible values for `CharacterSet` follow.) You can have AWS DMS modify the invalid target characters in one of the following ways:
   + Specify a single replacement Unicode code point for all invalid target characters, regardless of their current code point. To configure this replacement code point, set `ReplaceWithCharacterCodePoint` in `CharacterSetSupport` to the specified value.
   + Configure the deletion of all invalid target characters by setting `ReplaceWithCharacterCodePoint` to zero (0).

   For example, suppose that you specify the following values for `CharacterSetSupport`.

   ```
   "CharacterSetSettings": {
       "CharacterSetSupport": {
           "CharacterSet": "UTF16_PlatformEndian",
           "ReplaceWithCharacterCodePoint": 0
       }
   }
   ```

   In this example, AWS DMS deletes any characters found on the target that are invalid in the `"UTF16_PlatformEndian"` character set. So, any characters specified with the hex value `2FB6` are deleted. This value is invalid because this is a 4-byte Unicode code point and UTF16 character sets accept only characters with 2-byte code points.
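The two phases can be simulated in a few lines. This is only an illustration of the documented semantics (individual replacement first, then character set validation), assuming here that an "invalid" character is one whose code point doesn't fit in two bytes:

```python
# Illustration of the two substitution phases described above; not DMS code.
# Phase 1 uses the example mapping (code points 0x62 -> 0x61, 0x42 -> 0x41).
replacements = {0x62: 0x61, 0x42: 0x41}  # 'b' -> 'a', 'B' -> 'A'

def substitute(text: str) -> str:
    # Phase 1: individual character replacement (a target code point of 0
    # would mean "delete the character").
    out = []
    for ch in text:
        cp = replacements.get(ord(ch), ord(ch))
        if cp != 0:
            out.append(chr(cp))
    # Phase 2: character set validation with ReplaceWithCharacterCodePoint = 0,
    # assuming "invalid" means a code point outside the 2-byte range.
    return "".join(ch for ch in out if ord(ch) <= 0xFFFF)

print(substitute("Bob \U0001D11E"))  # 'Aoa ' -- the 4-byte musical symbol is deleted
```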

**Note**  
The replication task completes all of the specified character substitutions before starting any global or table-level transformations that you specify through table mapping. For more information about table mapping, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).  
Character substitution doesn't support LOB data types. This includes any data type that DMS considers to be a LOB. For example, the `Extended` data type in Oracle is considered to be a LOB. For more information about source data types, see [Source data types for Oracle](CHAP_Source.Oracle.md#CHAP_Source.Oracle.DataTypes) following. 

The values that AWS DMS supports for `CharacterSet` appear in the table following.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution.html)

# Before image task settings


When writing CDC updates to a data-streaming target like Kinesis or Apache Kafka, you can view a source database row's original values before they were changed by an update. To make this possible, AWS DMS populates a *before image* of update events based on data supplied by the source database engine. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

To do so, you use the `BeforeImageSettings` parameter, which adds a new JSON attribute to every update operation with values collected from the source database system. 

Make sure to apply `BeforeImageSettings` only to full load plus CDC tasks or CDC only tasks. Full load plus CDC tasks migrate existing data and replicate ongoing changes. CDC only tasks replicate data changes only. 

Don't apply `BeforeImageSettings` to tasks that are full load only.

Possible options for `BeforeImageSettings` are the following:
+ `EnableBeforeImage` – Turns on before imaging when set to `true`. The default is `false`. 
+ `FieldName` – Assigns a name to the new JSON attribute. When `EnableBeforeImage` is `true`, `FieldName` is required and can't be empty.
+ `ColumnFilter` – Specifies which columns to add by using before imaging. To add only columns that are part of the table's primary keys, use the default value, `pk-only`. To add any column that has a before image value, use `all`. Note that the before image doesn't support large object (LOB) data types such as CLOB and BLOB.

The following shows an example of the use of `BeforeImageSettings`. 

```
"BeforeImageSettings": {
    "EnableBeforeImage": true,
    "FieldName": "before-image",
    "ColumnFilter": "pk-only"
  }
```
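With the settings above, each update operation carries a new attribute named by `FieldName`. The following is a hypothetical sketch of that shape; only the `"before-image"` attribute name comes from the settings, and the surrounding record layout is illustrative, not the exact DMS message format:

```python
# Hypothetical sketch of an update event enriched with a before image.
# The record shape is illustrative; only the "before-image" field name
# is taken from the BeforeImageSettings example above.
update_event = {
    "data": {"id": 42, "status": "shipped"},      # row after the update
    "before-image": {"id": 42, "status": "new"},  # row before the update
}

# With "ColumnFilter": "pk-only", only primary-key columns appear in the
# before image; "all" would include every column with a before value.
pk_columns = ["id"]
pk_only_before_image = {k: v for k, v in update_event["before-image"].items()
                        if k in pk_columns}
print(pk_only_before_image)  # {'id': 42}
```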

For information on before image settings for Kinesis, including additional table mapping settings, see [Using a before image to view original values of CDC rows for a Kinesis data stream as a target](CHAP_Target.Kinesis.md#CHAP_Target.Kinesis.BeforeImage).

For information on before image settings for Kafka, including additional table mapping settings, see [Using a before image to view original values of CDC rows for Apache Kafka as a target](CHAP_Target.Kafka.md#CHAP_Target.Kafka.BeforeImage).

# Error handling task settings


You can set the error handling behavior of your replication task using the following settings. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ `DataErrorPolicy` – Determines the action AWS DMS takes when there is an error related to data processing at the record level. Some examples of data processing errors include conversion errors, errors in transformation, and bad data. The default is `LOG_ERROR`.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `DataErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataTruncationErrorPolicy` – Determines the action AWS DMS takes when data is truncated. The default is `LOG_ERROR`.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `DataErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataErrorEscalationPolicy` – Determines the action AWS DMS takes when the maximum number of errors (set in the `DataErrorEscalationCount` parameter) is reached. The default is `SUSPEND_TABLE`.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataErrorEscalationCount` – Sets the maximum number of errors that can occur to the data for a specific record. When this number is reached, the data for the table that contains the error record is handled according to the policy set in the `DataErrorEscalationPolicy`. The default is 0. 
+ `EventErrorPolicy` – Determines the action AWS DMS takes when an error occurs while sending a task-related event. Its possible values are
  + `IGNORE` – The task continues and any data associated with that event is ignored.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `TableErrorPolicy` – Determines the action AWS DMS takes when an error occurs when processing data or metadata for a specific table. This error only applies to general table data and isn't an error that relates to a specific record. The default is `SUSPEND_TABLE`.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `TableErrorEscalationPolicy` – Determines the action AWS DMS takes when the maximum number of errors (set using the `TableErrorEscalationCount` parameter) is reached. The default and only user setting is `STOP_TASK`, where the task is stopped and manual intervention is required.
+ `TableErrorEscalationCount` – The maximum number of errors that can occur to the general data or metadata for a specific table. When this number is reached, the data for the table is handled according to the policy set in the `TableErrorEscalationPolicy`. The default is 0. 
+ `RecoverableErrorCount` – The maximum number of attempts made to restart a task when an environmental error occurs. After the system attempts to restart the task the designated number of times, the task is stopped and manual intervention is required. The default value is -1.

  When you set this value to -1, the number of retries that DMS attempts varies based on the returned error type as follows:
  + **Running state, recoverable error**: If a recoverable error such as a lost connection or a target apply failure occurs, DMS retries the task nine times.
  + **Starting state, recoverable error**: DMS retries the task six times.
  + **Running state, fatal error handled by DMS**: DMS retries the task six times.
  + **Running state, fatal error not handled by DMS**: DMS does not retry the task.
  + **Other than above**: AWS DMS retries the task indefinitely.

  Set this value to 0 to never attempt to restart a task. 

  We recommend that you set `RecoverableErrorCount` and `RecoverableErrorInterval` to values such that there are sufficient retries at sufficient intervals for your DMS task to recover properly. If a fatal error occurs, DMS stops making restart attempts in most scenarios.
+ `RecoverableErrorInterval` – The number of seconds that AWS DMS waits between attempts to restart a task. The default is 5. 
+ `RecoverableErrorThrottling` – When enabled, the interval between attempts to restart a task is increased in a series based on the value of `RecoverableErrorInterval`. For example, if `RecoverableErrorInterval` is set to 5 seconds, then the next retry will happen after 10 seconds, then 20, then 40 seconds and so on. The default is `true`. 
+ `RecoverableErrorThrottlingMax` – The maximum number of seconds that AWS DMS waits between attempts to restart a task if `RecoverableErrorThrottling` is enabled. The default is 1800. 
+ `RecoverableErrorStopRetryAfterThrottlingMax` – The default is `true`, and DMS stops resuming the task after the maximum wait between recovery attempts (set in `RecoverableErrorThrottlingMax`) is reached. When set to `false`, DMS keeps resuming the task after that maximum wait is reached, until `RecoverableErrorCount` is reached.
+ `ApplyErrorDeletePolicy` – Determines what action AWS DMS takes when there is a conflict with a DELETE operation. The default is `IGNORE_RECORD`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `ApplyErrorInsertPolicy` – Determines what action AWS DMS takes when there is a conflict with an INSERT operation. The default is `LOG_ERROR`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
  + `INSERT_RECORD` – If there is an existing target record with the same primary key as the inserted source record, the target record is updated.
**Note**  
**In Transactional Apply mode**: In this process, the system first attempts to insert the record. If the insert fails due to a primary key conflict, it deletes the existing record and then inserts the new one. 
**In Batch Apply mode**: The process removes all existing records in the target batch before inserting the complete set of new records, ensuring a clean replacement of data.
This process prevents data duplication, but incurs some performance cost compared to the default policy. The exact performance impact depends on your specific workload characteristics.
+ `ApplyErrorUpdatePolicy` – Determines what action AWS DMS takes when there is a missing data conflict with an UPDATE operation. The default is `LOG_ERROR`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
  + `UPDATE_RECORD` – If the target record is missing, the missing target record is inserted into the target table. AWS DMS completely disables LOB column support for the task. Selecting this option requires full supplemental logging to be enabled for all the source table columns when Oracle is the source database.
**Note**  
**In Transactional Apply mode**: In this process, the system first attempts to update the record. If the update fails because the record is missing on the target, it runs a delete for the failed record and then inserts the new one. This process requires full supplemental logging for Oracle source databases, and DMS disables LOB column support for this task.
**In Batch Apply mode**: The process removes all existing records in the target batch before inserting the complete set of new records, ensuring a clean replacement of data.
+ `ApplyErrorEscalationPolicy` – Determines what action AWS DMS takes when the maximum number of errors (set using the `ApplyErrorEscalationCount` parameter) is reached. The default is `LOG_ERROR`:
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `ApplyErrorEscalationCount` – This option sets the maximum number of APPLY conflicts that can occur for a specific table during a change process operation. When this number is reached, the table data is handled according to the policy set in the `ApplyErrorEscalationPolicy` parameter. The default is 0. 
+ `ApplyErrorFailOnTruncationDdl` – Set this option to `true` to cause the task to fail when a truncation is performed on any of the tracked tables during CDC. The default is `false`. 

  This approach doesn't work with PostgreSQL version 11.x or lower, or any other source endpoint that doesn't replicate DDL table truncation.
+ `FailOnNoTablesCaptured` – Set this option to `true` to cause a task to fail when the table mappings defined for a task find no tables when the task starts. The default is `true`.
+ `FailOnTransactionConsistencyBreached` – This option applies to tasks using Oracle as a source with CDC. The default is `false`. Set it to `true` to cause a task to fail when a transaction is open for longer than the specified timeout and can be dropped. 

  When a CDC task starts with Oracle, AWS DMS waits for a limited time for the oldest open transaction to close before starting CDC. If the oldest open transaction doesn't close until the timeout is reached, then in most cases AWS DMS starts CDC, ignoring that transaction. If this option is set to `true`, the task fails.
+ `FullLoadIgnoreConflicts` – Set this option to `true` to have AWS DMS ignore "zero rows affected" and "duplicates" errors when applying cached events. If set to `false`, AWS DMS reports all errors instead of ignoring them. The default is `true`. 
+ `DataMaskingErrorPolicy` – Determines the action AWS DMS takes when data masking fails due to an incompatible data type or another reason. The following options are available:
  + `STOP_TASK` (Default) – The task stops and manual intervention is required.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored.
  + `LOG_ERROR` – The task continues and the error is written to the task log. Unmasked data is loaded into the target table.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.

**Note**  
 Table load errors in Redshift as a target are reported in `STL_LOAD_ERRORS`. For more information, see [STL_LOAD_ERRORS](https://docs.aws.amazon.com/redshift/latest/dg/r_STL_LOAD_ERRORS.html) in the *Amazon Redshift Database Developer Guide*.

**Note**  
Parameter changes related to recoverable errors take effect only after you stop and resume the DMS task. They don't apply to the current execution.
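The throttled retry backoff described for `RecoverableErrorThrottling` can be sketched as follows. This is an illustration of the documented doubling behavior, not DMS internals; whether the first wait is the base interval or already doubled isn't specified, so this sketch starts at the base:

```python
# Illustrative sketch of the retry backoff described for
# RecoverableErrorThrottling; not AWS DMS internal code.
def retry_intervals(base: int, throttling_max: int, attempts: int) -> list[int]:
    """Doubling wait times starting at RecoverableErrorInterval,
    capped at RecoverableErrorThrottlingMax."""
    return [min(base * 2 ** i, throttling_max) for i in range(attempts)]

# RecoverableErrorInterval = 5, RecoverableErrorThrottlingMax = 1800
print(retry_intervals(5, 1800, 10))
# [5, 10, 20, 40, 80, 160, 320, 640, 1280, 1800]
```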

# Saving task settings


You can save task settings as a JSON file in case you want to reuse the settings for another task. You can find tasks settings to copy to a JSON file under the **Overview details** section of a task.

**Note**  
While reusing task settings for other tasks, remove any `CloudWatchLogGroup` and `CloudWatchLogStream` attributes. Otherwise, the following error occurs: `SYSTEM ERROR MESSAGE:Task Settings CloudWatchLogGroup or CloudWatchLogStream cannot be set on create.`

For example, the following JSON file contains settings saved for a task.

```
{
    "TargetMetadata": {
        "TargetSchema": "",
        "SupportLobs": true,
        "FullLobMode": false,
        "LobChunkSize": 0,
        "LimitedSizeLobMode": true,
        "LobMaxSize": 32,
        "InlineLobMaxSize": 0,
        "LoadMaxFileSize": 0,
        "ParallelLoadThreads": 0,
        "ParallelLoadBufferSize": 0,
        "BatchApplyEnabled": false,
        "TaskRecoveryTableEnabled": false,
        "ParallelLoadQueuesPerThread": 0,
        "ParallelApplyThreads": 0,
        "ParallelApplyBufferSize": 0,
        "ParallelApplyQueuesPerThread": 0
    },
    "FullLoadSettings": {
        "TargetTablePrepMode": "DO_NOTHING",
        "CreatePkAfterFullLoad": false,
        "StopTaskCachedChangesApplied": false,
        "StopTaskCachedChangesNotApplied": false,
        "MaxFullLoadSubTasks": 8,
        "TransactionConsistencyTimeout": 600,
        "CommitRate": 10000
    },
    "Logging": {
        "EnableLogging": true,
        "LogComponents": [
            {
                "Id": "TRANSFORMATION",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SOURCE_UNLOAD",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "IO",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TARGET_LOAD",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "PERFORMANCE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SOURCE_CAPTURE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SORTER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "REST_SERVER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "VALIDATOR_EXT",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TARGET_APPLY",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TASK_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TABLES_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "METADATA_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "FILE_FACTORY",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "COMMON",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "ADDONS",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "DATA_STRUCTURE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "COMMUNICATION",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "FILE_TRANSFER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            }
        ]
    },
    "ControlTablesSettings": {
        "ControlSchema": "",
        "HistoryTimeslotInMinutes": 5,
        "HistoryTableEnabled": false,
        "SuspendedTablesTableEnabled": false,
        "StatusTableEnabled": false,
        "FullLoadExceptionTableEnabled": false
    },
    "StreamBufferSettings": {
        "StreamBufferCount": 3,
        "StreamBufferSizeInMB": 8,
        "CtrlStreamBufferSizeInMB": 5
    },
    "ChangeProcessingDdlHandlingPolicy": {
        "HandleSourceTableDropped": true,
        "HandleSourceTableTruncated": true,
        "HandleSourceTableAltered": true
    },
    "ErrorBehavior": {
        "DataErrorPolicy": "LOG_ERROR",
        "DataTruncationErrorPolicy": "LOG_ERROR",
        "DataErrorEscalationPolicy": "SUSPEND_TABLE",
        "DataErrorEscalationCount": 0,
        "TableErrorPolicy": "SUSPEND_TABLE",
        "TableErrorEscalationPolicy": "STOP_TASK",
        "TableErrorEscalationCount": 0,
        "RecoverableErrorCount": -1,
        "RecoverableErrorInterval": 5,
        "RecoverableErrorThrottling": true,
        "RecoverableErrorThrottlingMax": 1800,
        "RecoverableErrorStopRetryAfterThrottlingMax": true,
        "ApplyErrorDeletePolicy": "IGNORE_RECORD",
        "ApplyErrorInsertPolicy": "LOG_ERROR",
        "ApplyErrorUpdatePolicy": "LOG_ERROR",
        "ApplyErrorEscalationPolicy": "LOG_ERROR",
        "ApplyErrorEscalationCount": 0,
        "ApplyErrorFailOnTruncationDdl": false,
        "FullLoadIgnoreConflicts": true,
        "FailOnTransactionConsistencyBreached": false,
        "FailOnNoTablesCaptured": true
    },
    "ChangeProcessingTuning": {
        "BatchApplyPreserveTransaction": true,
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
        "BatchSplitSize": 0,
        "MinTransactionSize": 1000,
        "CommitTimeout": 1,
        "MemoryLimitTotal": 1024,
        "MemoryKeepTime": 60,
        "StatementCacheSize": 50
    },
    "PostProcessingRules": null,
    "CharacterSetSettings": null,
    "LoopbackPreventionSettings": null,
    "BeforeImageSettings": null,
    "FailTaskWhenCleanTaskResourceFailed": false
}
```
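When reusing saved settings such as the example above, the CloudWatch attributes mentioned in the note can be stripped programmatically before creating a new task. A minimal sketch, assuming those attributes appear inside the `Logging` section as they do in console-saved settings:

```python
import json

def prepare_for_reuse(saved_settings_json: str) -> str:
    """Remove CloudWatch log attributes so saved settings can be
    reused to create a new task (see the note above)."""
    settings = json.loads(saved_settings_json)
    logging = settings.get("Logging", {})
    logging.pop("CloudWatchLogGroup", None)
    logging.pop("CloudWatchLogStream", None)
    return json.dumps(settings)

saved = '{"Logging": {"EnableLogging": true, "CloudWatchLogGroup": "dms-tasks"}}'
print(prepare_for_reuse(saved))  # {"Logging": {"EnableLogging": true}}
```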