AWS Database Migration Service
User Guide (API Version 2016-01-01)

Using Extra Connection Attributes with AWS Database Migration Service

You can specify additional connection attributes when you create an endpoint for AWS Database Migration Service. The following database engine-specific sections show the possible settings.
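
Connection attributes are passed to an endpoint as a single string of name=value pairs separated by semicolons. The following is a minimal sketch of setting them when creating an endpoint with the AWS SDK for Python (boto3); the identifiers, host name, and credentials are placeholders, not values from this guide.

import boto3

dms = boto3.client("dms")

# Extra connection attributes are passed as one semicolon-separated string.
extra_attrs = "eventsPollInterval=5;initstmt=SET time_zone=UTC"

# Placeholder identifiers and credentials; replace them with your own values.
dms.create_endpoint(
    EndpointIdentifier="mysql-source-endpoint",
    EndpointType="source",
    EngineName="mysql",
    ServerName="mysql.example.com",
    Port=3306,
    Username="dms_user",
    Password="dms_password",
    ExtraConnectionAttributes=extra_attrs,
)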

MySQL

Source

eventsPollInterval

Specifies how often to check the binary log for new changes/events when the database is idle.

Default value: 5

Valid values: 1 - 60

Example: eventsPollInterval=5

initstmt=SET time_zone

Specifies the time zone for the source MySQL database.

Default value: UTC

Valid values: Any three or four character abbreviation for the time zone you want to use. Valid values are the standard time zone abbreviations for the operating system hosting the source MySQL database.

Example: initstmt=SET time_zone=UTC

afterConnectScript

Specifies a script to run immediately after AWS DMS connects to the endpoint.

Valid values: One or more SQL statements, separated by semicolons.

Example: afterConnectScript=ALTER SESSION SET CURRENT_SCHEMA = system;

Target

targetDbType

Specifies where to migrate source tables on the target, either to a single database or multiple databases.

Default value: MULTIPLE_DATABASES

Valid values: {SPECIFIC_DATABASE, MULTIPLE_DATABASES}

Example: targetDbType=MULTIPLE_DATABASES

parallelLoadThreads

Specifies how many threads to use to load data into the MySQL target database. Using multiple threads can improve performance when loading data into the target. Note that setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.

Default value: 2

Valid values: 1-5

Example: parallelLoadThreads=1

initstmt=SET FOREIGN_KEY_CHECKS=0

Disables foreign key checks.

initstmt=SET time_zone

Specifies the time zone for the target MySQL database.

Default value: UTC

Valid values: A three or four character abbreviation for the time zone you want to use. Valid values are the standard time zone abbreviations for the operating system hosting the target MySQL database.

Example: initstmt=SET time_zone=UTC

afterConnectScript=SET character_set_connection='latin1'

Improves the performance of certain operations on the target MySQL database when converting from Latin1 to UTF8.

maxFileSize

Specifies the maximum size (in KB) of any CSV file used to transfer data to MySQL.

Default value: 32768 KB (32 MB)

Valid values: 1 - 1048576

Example: maxFileSize=512
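
To apply several of the MySQL target attributes above at once, join them into one attribute string. The following sketch assumes the AWS SDK for Python (boto3) and a hypothetical endpoint ARN; replace it with the ARN of your own MySQL target endpoint.

import boto3

dms = boto3.client("dms")

# Combine several MySQL target attributes into one semicolon-separated string.
mysql_target_attrs = ";".join([
    "targetDbType=SPECIFIC_DATABASE",
    "parallelLoadThreads=5",
    "initstmt=SET FOREIGN_KEY_CHECKS=0",
    "maxFileSize=524288",  # 512 MB, within the 1 - 1048576 KB range
])

# Hypothetical endpoint ARN for illustration only.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    ExtraConnectionAttributes=mysql_target_attrs,
)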

PostgreSQL

Source

captureDDLs

To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts as described in the section Removing AWS DMS Artifacts from a PostgreSQL Source Database.

When this value is set to Y (the default), streamed DDL events are captured. If this value is set to N, you do not have to create tables or triggers on the source database, but streamed DDL events are not captured. For more information, see Migrating an Amazon RDS for PostgreSQL Database Without Using the Master User Account.

Default value: Y

Valid values: Y/N

Example: captureDDLs=Y

ddlArtifactsSchema

The schema in which the operational DDL database artifacts are created.

Default value: public

Valid values: String

Example: ddlArtifactsSchema=xyzddlschema

failTasksOnLobTruncation

When set to true, causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize.

If a task is set to Limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.

Default value: false

Valid values: Boolean

Example: failTasksOnLobTruncation=true

Target

maxFileSize

Specifies the maximum size (in KB) of any CSV file used to transfer data to PostgreSQL.

Default value: 32768 KB (32 MB)

Valid values: 1 - 1048576

Example: maxFileSize=512
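
The PostgreSQL attributes are combined into a single string in the same way. A minimal sketch, again assuming the AWS SDK for Python (boto3) and a hypothetical endpoint ARN, for a PostgreSQL source endpoint:

import boto3

dms = boto3.client("dms")

# Capture DDL events into a custom schema and fail rather than truncate oversized LOBs.
postgres_source_attrs = (
    "captureDDLs=Y;"
    "ddlArtifactsSchema=xyzddlschema;"
    "failTasksOnLobTruncation=true"
)

# Hypothetical endpoint ARN for illustration only.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    ExtraConnectionAttributes=postgres_source_attrs,
)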

Oracle

Source

addSupplementalLogging

Set this attribute to Y to automatically set up supplemental logging for the Oracle database.

Default value: N

Valid values: Y/N

Example: addSupplementalLogging=Y

Note

If you use this option, you still need to enable supplemental logging at the database level using the following statement:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA

useLogminerReader

Set this attribute to Y to capture change data using the LogMiner utility (the default). Set this option to N if you want AWS DMS to access the redo logs as a binary file. When set to N, you must also add the setting useBfile=Y. For more information, see Using Oracle LogMiner or Oracle Binary Reader for Change Data Capture (CDC).

Default value: Y

Valid values: Y/N

Example: useLogminerReader=N; useBfile=Y

If the Oracle source database uses Oracle ASM (Automatic Storage Management), the extra connection attributes need to include the ASM user name and ASM server address. In addition, the endpoint password field needs to contain both the Oracle user password and the ASM password, separated by a comma.

Example: useLogminerReader=N;asm_user=<asm_username>;asm_server=<first_RAC_server_ip_address>/+ASM

Example password field: <oracle_user_password>,<asm_user_password>
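
As a sketch of how the Binary Reader and ASM settings fit together, the following uses the AWS SDK for Python (boto3); the user names, addresses, and passwords are placeholders.

import boto3

dms = boto3.client("dms")

# Binary Reader with ASM: the attribute string carries the ASM user and server,
# and the endpoint password carries both passwords separated by a comma.
oracle_source_attrs = (
    "useLogminerReader=N;"
    "useBfile=Y;"
    "asm_user=asm_dms_user;"
    "asm_server=10.0.0.10/+ASM"
)
combined_password = "oracle_user_password,asm_user_password"

# Placeholder connection details; replace them with your own values.
dms.create_endpoint(
    EndpointIdentifier="oracle-source-endpoint",
    EndpointType="source",
    EngineName="oracle",
    ServerName="oracle.example.com",
    Port=1521,
    DatabaseName="ORCL",
    Username="dms_user",
    Password=combined_password,
    ExtraConnectionAttributes=oracle_source_attrs,
)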

retryInterval

Specifies the number of seconds that the system waits before resending a query.

Default value: 5

Valid values: A number starting from 1

Example: retryInterval=6

archivedLogDestId

Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify the archived redo logs location identifier. Doing so improves performance by ensuring that the correct logs are accessed from the outset.

Default value: 0

Valid values: Number

Example: archivedLogDestId=1

archivedLogsOnly

When this field is set to Y, AWS DMS accesses only the archived redo logs. If the archived redo logs are stored on ASM only, the AWS DMS user needs to be granted ASM privileges.

Default value: N

Valid values: Y/N

Example: archivedLogsOnly=Y

numberDataTypeScale

Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.

Default value: 10

Valid values: -1 to 38 (-1 for FLOAT)

Example: numberDataTypeScale=12

afterConnectScript

Specifies a script to run immediately after AWS DMS connects to the endpoint.

Valid values: One or more SQL statements, separated by semicolons.

Example: afterConnectScript=ALTER SESSION SET CURRENT_SCHEMA = system;

failTasksOnLobTruncation

When set to true, causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize.

If task is set to Limited LOB mode and this option is set to true, the task will fail instead of truncating the LOB data.

Default value: false

Valid values: Boolean

Example: failTasksOnLobTruncation=true

Target

useDirectPathFullLoad

Use direct path full load. Specify this attribute to enable or disable the OCI direct path protocol for bulk loading Oracle tables.

Default value: Y

Valid values: Y/N

Example: useDirectPathFullLoad=N

charLengthSemantics

Column length semantics specifies whether the length of a column is in bytes or in characters. Set this value to CHAR.

Example: charLengthSemantics=CHAR

SQL Server

Source

safeguardPolicy

For optimal performance, AWS DMS tries to capture all unread changes from the active transaction log (TLOG). However, sometimes due to truncation, the active TLOG may not contain all of the unread changes. When this occurs, AWS DMS accesses the backup log to capture the missing changes. To minimize the need to access the backup log, AWS DMS prevents truncation using one of the following methods:

1. Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started will not be truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.

2. Exclusively use sp_repldone within a single task: When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method does not involve any transactional activities, it can only be used when Microsoft Replication is not running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.

Default value: RELY_ON_SQL_SERVER_REPLICATION_AGENT

Valid values: {EXCLUSIVE_AUTOMATIC_TRUNCATION, RELY_ON_SQL_SERVER_REPLICATION_AGENT}

Example: safeguardPolicy=RELY_ON_SQL_SERVER_REPLICATION_AGENT

Target

useBCPFullLoad

Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading tables option.

Default value: Y

Valid values: Y/N

Example: useBCPFullLoad=Y

BCPPacketSize

The maximum size of the packets (in bytes) used to transfer data using BCP.

Default value: 16384

Valid values: 1 - 100000

Example: BCPPacketSize=16384

controlTablesFileGroup

Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.

Default value: n/a

Valid values: String

Example: controlTablesFileGroup=filegroup1  

The following is an example of a command for creating a filegroup:  

ALTER DATABASE replicate ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE replicate ADD FILE (
    NAME = test1dat5,
    FILENAME = 'C:\temp\DATA\t1dat5.ndf',
    SIZE = 5MB,
    MAXSIZE = 100MB,
    FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
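
The SQL Server target attributes above can likewise be combined into one string. A minimal sketch, assuming the AWS SDK for Python (boto3) and a hypothetical endpoint ARN, that uses BCP with a larger packet size and places the control tables in the filegroup created above:

import boto3

dms = boto3.client("dms")

# Use BCP for full load with a larger packet size, and put the AWS DMS
# control tables in the Test1FG1 filegroup created above.
sqlserver_target_attrs = (
    "useBCPFullLoad=Y;"
    "BCPPacketSize=32768;"
    "controlTablesFileGroup=Test1FG1"
)

# Hypothetical endpoint ARN for illustration only.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    ExtraConnectionAttributes=sqlserver_target_attrs,
)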

Amazon Redshift

Target

maxFileSize

Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift.

Default value: 32768 KB (32 MB)

Valid values: 1 - 1048576

Example: maxFileSize=512

fileTransferUploadStreams

Specifies the number of threads used to upload a single file.

Default value: 10

Valid values: 1 - 64

Example: fileTransferUploadStreams=20
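
A sketch of setting both Amazon Redshift target attributes together, assuming the AWS SDK for Python (boto3) and a hypothetical endpoint ARN:

import boto3

dms = boto3.client("dms")

# Larger CSV files and more upload threads per file for the Redshift target.
redshift_target_attrs = "maxFileSize=65536;fileTransferUploadStreams=20"

# Hypothetical endpoint ARN for illustration only.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    ExtraConnectionAttributes=redshift_target_attrs,
)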

SAP Adaptive Server Enterprise (ASE)

Target

enableReplication

Set this attribute to Y to automatically enable SAP ASE replication. This is required only if SAP ASE replication has not been enabled already.

additionalConnectionProperties

Any additional ODBC connection parameters that you want to specify.

Note

If the user name or password specified in the connection string contains non-Latin characters (for example, Chinese), the following property is required: charset=gb18030