
Using an Oracle Database as a Source for AWS DMS

You can migrate data from one or many Oracle databases using AWS DMS. With an Oracle database as a source, you can migrate data to any of the targets supported by AWS DMS.

DMS supports the following Oracle database editions:

  • Oracle Enterprise Edition

  • Oracle Standard Edition

  • Oracle Express Edition

  • Oracle Personal Edition

For self-managed Oracle databases as sources, AWS DMS supports all Oracle database editions for versions 10.2 and later, 11g and up to 12.2, 18c, and 19c. For Amazon-managed Oracle databases provided by Amazon RDS, AWS DMS supports all Oracle database editions for versions 11g (versions 11.2.0.3.v1 and later) and up to 12.2, 18c, and 19c.

Note
  • Support for Oracle version 19c as a source is available in AWS DMS versions 3.3.2 and later.

  • Support for Oracle version 18c as a source is available in AWS DMS versions 3.3.1 and later.

You can use Secure Sockets Layer (SSL) to encrypt connections between your Oracle endpoint and your replication instance. For more information on using SSL with an Oracle endpoint, see Using SSL With AWS Database Migration Service. AWS DMS also supports the use of Oracle transparent data encryption (TDE) to encrypt data at rest in the source database. For more information on using Oracle TDE with an Oracle source endpoint, see Supported Encryption Methods for Using Oracle as a Source for AWS DMS.

The steps to configure an Oracle database as a source for AWS DMS are as follows:

  1. If you want to create a CDC-only or full load plus CDC task, choose either Oracle LogMiner or AWS DMS Binary Reader to capture data changes. Choosing LogMiner or Binary Reader determines some of the later permission and configuration tasks. For a comparison of LogMiner and Binary Reader, see the following section.

  2. Create an Oracle user with the appropriate permissions for AWS DMS. If you are creating a full-load-only task, no further configuration is needed.

  3. Create a DMS endpoint that conforms with your chosen configuration.

For additional details on working with Oracle databases and AWS DMS, see the following sections.

Using Oracle LogMiner or AWS DMS Binary Reader for Change Data Capture (CDC)

Oracle offers two methods for reading the redo logs when doing change processing: Oracle LogMiner and AWS DMS Binary Reader. LogMiner is an Oracle API to read the online redo logs and archived redo log files. Binary Reader is an AWS DMS native method that reads and parses the raw redo log files directly.

By default, AWS DMS uses Oracle LogMiner for change data capture (CDC).

The advantages of using LogMiner with AWS DMS include the following:

  • LogMiner supports most Oracle options, such as encryption options and compression options. Binary Reader doesn't support all Oracle options, particularly compression and most options for encryption.

  • LogMiner offers a simpler configuration, especially compared to Binary Reader's direct-access setup or when the redo logs are managed using Oracle Automatic Storage Management (ASM).

  • LogMiner supports the following Hybrid Columnar Compression (HCC) types for both full load and on-going replication (CDC):

    • QUERY HIGH

    • ARCHIVE HIGH

    • ARCHIVE LOW

    • QUERY LOW

    Binary Reader supports QUERY LOW compression only for full load replications, not ongoing (CDC) replications.

  • LogMiner supports table clusters for use by AWS DMS. Binary Reader doesn't.

The advantages of using Binary Reader with AWS DMS, instead of LogMiner, include the following:

  • For migrations with a high volume of changes, LogMiner might have some I/O or CPU impact on the computer hosting the Oracle source database. Binary Reader has less chance of having I/O or CPU impact because the archive logs are copied to the replication instance and mined there.

  • For migrations with a high volume of changes, CDC performance is usually much better when using Binary Reader compared with using Oracle LogMiner.

  • Binary Reader supports CDC for LOBs in Oracle version 12c. LogMiner doesn't.

  • Binary Reader supports the following HCC compression types for both full load and continuous replication (CDC):

    • QUERY HIGH

    • ARCHIVE HIGH

    • ARCHIVE LOW

      The QUERY LOW compression type is only supported for full load migrations.

In general, use Oracle LogMiner for migrating your Oracle database unless you have one of the following situations:

  • You need to run several migration tasks on the source Oracle database.

  • The volume of changes or the redo log volume on the source Oracle database is high, or you need to capture changes from a database that uses Oracle Automatic Storage Management (ASM).

  • Your workload includes UPDATE statements that update only large object (LOB) columns. In this case, use Binary Reader. These UPDATE statements aren't supported by Oracle LogMiner.

  • You are migrating LOB columns from Oracle 12c. For Oracle 12c, LogMiner doesn't support LOB columns. So, in this case, use Binary Reader.

Note

If you change between using Oracle LogMiner and AWS DMS Binary Reader to perform change data capture (CDC), you must restart the task.

Configuration for Change Data Capture (CDC) on an Oracle Source Database

When you use Oracle as a source endpoint either for full-load and change data capture (CDC) or for CDC only, you set extra connection attributes. These attributes specify whether you want to use Binary Reader to access the transaction logs. You specify extra connection attributes when you create the source endpoint. If you have multiple connection attribute settings, separate them from each other by semicolons with no additional white space (for example, oneSetting;thenAnother).

LogMiner is used by default, so you don't have to explicitly specify its use. To use Binary Reader to access the transaction logs, add the following extra connection attributes.

useLogMinerReader=N;useBfile=Y;
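Because the attribute string must be semicolon-separated with no white space, it can help to assemble it programmatically. The following Python sketch is a hypothetical helper (not part of AWS DMS) that builds a string in the format shown above:

```python
def build_extra_connection_attributes(settings):
    """Join extra connection attribute settings with semicolons.

    `settings` maps attribute names to values. The result contains
    no white space, as AWS DMS requires for multiple settings.
    """
    eca = ";".join(f"{name}={value}" for name, value in settings.items())
    if any(ch.isspace() for ch in eca):
        raise ValueError("extra connection attributes must not contain white space")
    return eca

# Attributes that switch the endpoint from LogMiner to Binary Reader:
attrs = build_extra_connection_attributes({"useLogMinerReader": "N", "useBfile": "Y"})
print(attrs)  # useLogMinerReader=N;useBfile=Y
```

The resulting string is what you paste into the endpoint's extra connection attributes field.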
Note

You can configure a CDC load for an Amazon RDS for Oracle source instance using Binary Reader only for Oracle versions 11.2.0.4.v11 and later, and 12.1.0.2.v7. For an Amazon RDS for Oracle version 12.2 or 18c instance, use AWS DMS version 3.3.1. For more information, see Working with an Amazon-Managed Oracle Database as a Source for AWS DMS.

In AWS DMS versions earlier than 3.x where the Oracle source database uses Oracle ASM, using the Binary Reader requires additional attributes. In addition to the preceding, also make sure that you specify attributes for the ASM user name and ASM server address. When you create the source endpoint, the Password request parameter must specify both passwords: the Oracle source endpoint password and the ASM password.

Use the following format for the extra connection attributes to access a server that uses ASM with Binary Reader.

useLogMinerReader=N;useBfile=Y;asm_user=asm_username;asm_server=RAC_server_ip_address:port_number/+ASM;

Set the source endpoint Password request parameter to both the Oracle user password and the ASM password, separated by a comma as follows.

oracle_user_password,asm_user_password
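As a quick illustration of the comma-separated format above, this hypothetical Python helper composes the combined Password value:

```python
def build_source_password(oracle_user_password, asm_user_password):
    """Combine the Oracle endpoint password and the ASM password into
    the single Password request parameter value, separated by a comma
    as AWS DMS expects for ASM access with Binary Reader."""
    for pw in (oracle_user_password, asm_user_password):
        if "," in pw:
            raise ValueError("a password containing a comma would break the separator")
    return f"{oracle_user_password},{asm_user_password}"

print(build_source_password("oracle_pw", "asm_pw"))  # oracle_pw,asm_pw
```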

In AWS DMS versions 3.x or later where the Oracle source uses ASM, you can work with high-performance options in Binary Reader for transaction processing at scale. In this case, the replication instance supports these high-performance options. These options include extra connection attributes to specify the number of parallel threads (parallelASMReadThreads) and the number of read-ahead buffers (readAheadBlocks). Setting these attributes together can significantly improve the performance of a CDC load using ASM. The settings shown following provide good results for most ASM configurations.

useLogMinerReader=N;useBfile=Y;parallelASMReadThreads=6;readAheadBlocks=150000;

For more information on values that extra connection attributes support, see Extra Connection Attributes When Using Oracle as a Source for AWS DMS.

In addition, the performance of a CDC load using ASM depends on other settings that you choose. These settings include your AWS DMS extra connection attributes and the SQL settings to configure the Oracle source. If you are using AWS DMS versions 2.4.x, the above performance parameters do not apply. For these AWS DMS versions, see the How to Migrate from Oracle ASM to AWS using AWS DMS post on the AWS Database Blog for more information on using ASM with your Oracle source endpoint.

Working with a Self-Managed Oracle Database as a Source for AWS DMS

A self-managed database is a database that you configure and control, either a local on-premises database instance or a database on Amazon EC2. Following, you can find out about the privileges and configurations you need to set up when using a self-managed Oracle database with AWS DMS.

User Account Privileges Required on a Self-Managed Oracle Source for AWS DMS

To use an Oracle database as a source in AWS DMS, grant the privileges following to the Oracle user specified in the Oracle endpoint connection settings.

Note

When granting privileges, use the actual name of objects, not the synonym for each object. For example, use V_$OBJECT including the underscore, not V$OBJECT without the underscore.

  • SELECT ANY TRANSACTION

  • SELECT on V_$ARCHIVED_LOG

  • SELECT on V_$LOG

  • SELECT on V_$LOGFILE

  • SELECT on V_$DATABASE

  • SELECT on V_$THREAD

  • SELECT on V_$PARAMETER

  • SELECT on V_$NLS_PARAMETERS

  • SELECT on V_$TIMEZONE_NAMES

  • SELECT on V_$TRANSACTION

  • SELECT on ALL_INDEXES

  • SELECT on ALL_OBJECTS

  • SELECT on DBA_OBJECTS – Required if the Oracle version is earlier than 11.2.0.3.

Grant the additional privilege following for each replicated table when you are using a specific table list.

SELECT on any-replicated-table;

Grant the additional privilege following when using a pattern for the table list.

SELECT ANY TABLE;

Grant the additional privilege following for each replicated table when AWS DMS adds supplemental logging automatically (the default behavior) and you are using a specific table list. For information on how to turn off supplemental logging, see Extra Connection Attributes When Using Oracle as a Source for AWS DMS.

ALTER on any-replicated-table;

Grant the additional privilege following when AWS DMS adds supplemental logging automatically (the default behavior). For information on how to turn off supplemental logging, see Extra Connection Attributes When Using Oracle as a Source for AWS DMS.

ALTER ANY TABLE;

When accessing an Oracle standby database, grant the privilege following.

SELECT on V$STANDBY_LOG;

Account Privileges Required When Using Oracle LogMiner to Access the Redo Logs

To access the redo logs using the Oracle LogMiner, grant the privileges following to the Oracle user specified in the Oracle endpoint connection settings:

  • CREATE SESSION

  • EXECUTE on DBMS_LOGMNR

  • SELECT on V_$LOGMNR_LOGS

  • SELECT on V_$LOGMNR_CONTENTS

  • GRANT LOGMINING – Required only if the Oracle version is 12c or later.

Account Privileges Required When Using AWS DMS Binary Reader to Access the Redo Logs

To access the redo logs using the AWS DMS Binary Reader, grant the privileges following to the Oracle user specified in the Oracle endpoint connection settings:

  • CREATE SESSION

  • SELECT on v_$transportable_platform – Grant this privilege if the redo logs are stored in Oracle Automatic Storage Management (ASM) and AWS DMS accesses them from ASM.

  • CREATE ANY DIRECTORY – Grant this privilege to allow AWS DMS to use Oracle BFILE read file access in certain cases. This access is required when the replication instance doesn't have file-level access to the redo logs and the redo logs are on non-ASM storage.

  • EXECUTE on DBMS_FILE_TRANSFER package – Grant this privilege to copy the redo log files to a temporary folder using the CopyToTempFolder method.

  • EXECUTE on DBMS_FILE_GROUP

Binary Reader works with Oracle file features that include Oracle directories. Each Oracle directory object includes the name of the folder containing the redo log files to process. These Oracle directories aren't represented at the file system level. Instead, they are logical directories that are created at the Oracle database level. You can view them in the Oracle ALL_DIRECTORIES view.

If you want AWS DMS to create these Oracle directories, grant the CREATE ANY DIRECTORY privilege specified preceding. AWS DMS creates the directory names with the DMS_ prefix. If you don't grant the CREATE ANY DIRECTORY privilege, create the corresponding directories manually. In some cases when you create the Oracle directories manually, the Oracle user specified in the Oracle source endpoint isn't the user that created these directories. In these cases, also grant the READ on DIRECTORY privilege.

If the Oracle source endpoint is in Active Data Guard Standby (ADG), see the How to Use Binary Reader with ADG post on the AWS Database Blog.

In some cases, you might use Oracle Managed Files (OMF) for storing the logs. Or your source endpoint is in ADG and the CREATE ANY DIRECTORY privilege can't be granted. In these cases, manually create the directories with all the possible log locations before starting the AWS DMS replication task. If AWS DMS doesn't find a precreated directory that it expects, the task stops. Also, AWS DMS doesn't delete the entries it has created in the ALL_DIRECTORIES view, so manually delete them.

Account Privileges Required When Using Binary Reader with Oracle Automatic Storage Management (ASM)

To access the redo logs in Automatic Storage Management (ASM) using Binary Reader, grant the privileges following to the Oracle user specified in the Oracle endpoint connection settings:

  • SELECT ON v_$transportable_platform

  • SYSASM – To access the ASM account with Oracle 11g Release 2 (version 11.2.0.2) and later, grant the Oracle endpoint user the SYSASM privilege. For older supported Oracle versions, it's typically sufficient to grant the Oracle endpoint user the SYSDBA privilege.

You can validate ASM account access by opening a command prompt and invoking one of the statements following, depending on your Oracle version as specified preceding.

If you need the SYSDBA privilege, use the following.

sqlplus asmuser/asmpassword@+asmserver as sysdba

If you need the SYSASM privilege, use the following.

sqlplus asmuser/asmpassword@+asmserver as sysasm
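The choice between SYSDBA and SYSASM follows directly from the ASM version rule stated above. The following Python sketch is an illustrative helper (the dotted-version parsing is an assumption about standard Oracle version strings) that picks the role and builds the matching sqlplus command:

```python
def asm_connect_role(oracle_version):
    """Return the privilege to use when connecting to ASM.

    Oracle 11g Release 2 (11.2.0.2) and later require SYSASM;
    older supported versions typically accept SYSDBA.
    """
    parts = tuple(int(p) for p in oracle_version.split("."))
    return "sysasm" if parts >= (11, 2, 0, 2) else "sysdba"

def sqlplus_command(user, password, server, version):
    """Build the validation command shown in the text."""
    return f"sqlplus {user}/{password}@+{server} as {asm_connect_role(version)}"

print(sqlplus_command("asmuser", "asmpassword", "asmserver", "11.2.0.2"))
# sqlplus asmuser/asmpassword@+asmserver as sysasm
```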

Configuring a Self-Managed Oracle Source for Replication with AWS DMS

Following are configuration requirements for using a self-managed Oracle database as a source in AWS DMS:

Providing Oracle Account Access

To provide Oracle account access to the AWS DMS user

Setting Logs to ARCHIVELOG Mode

To set logs to ARCHIVELOG mode

You can also set up and verify supplemental logging, as described following.

Verifying and Setting Up Supplemental Logging

To set up supplemental logging, take the following steps, described in more detail later in this section:

  1. Verify that supplemental logging is enabled for the database.

  2. Verify that the required supplemental logging is enabled for each table.

  3. If a filter or transformation is defined for a table, enable additional logging as needed.

To verify and, if needed, enable supplemental logging for the database

  1. Run the sample query following to verify that the current version of the Oracle database is supported by AWS DMS. If the query runs without error, the returned database version is supported.

    SELECT name, value, description FROM v$parameter WHERE name = 'compatible';

    This query returns the value of the Oracle compatible initialization parameter, which AWS DMS checks against the database versions that it supports. If the returned version is supported, you can proceed with migration. If it isn't, first convert the Oracle database to an AWS DMS-supported version. Then, start again with configuring the database as described in Configuring a Self-Managed Oracle Source for Replication with AWS DMS.

  2. Run the query following to verify that supplemental logging is enabled for the database. If the returned result is YES or IMPLICIT, supplemental logging is enabled for the database.

    SELECT supplemental_log_data_min FROM v$database;
  3. If needed, enable supplemental logging for the database by running the command following.

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    Note

    If your Oracle database instance is on Amazon RDS, run a different command. For more information, see Working with an Amazon-Managed Oracle Database as a Source for AWS DMS.
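The database-level check above reduces to a simple decision on the value returned for supplemental_log_data_min. This hypothetical Python sketch captures that logic (running the actual query would require an Oracle driver such as python-oracledb, omitted here):

```python
def database_needs_supplemental_logging(supplemental_log_data_min):
    """Decide whether ALTER DATABASE ADD SUPPLEMENTAL LOG DATA is needed.

    Per the check above, a returned value of YES or IMPLICIT means
    database-level supplemental logging is already enabled.
    """
    return supplemental_log_data_min.upper() not in ("YES", "IMPLICIT")

print(database_needs_supplemental_logging("NO"))        # True
print(database_needs_supplemental_logging("IMPLICIT"))  # False
```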

To verify that the required supplemental logging is enabled for each database table

  • Do one of the following:

    • If a primary key exists, add supplemental logging for the primary key. You can do this either by using the format to add supplemental logging on the primary key, or by adding supplemental logging on the primary key columns.

    • If no primary key exists, do one of the following:

      • If no primary key exists and the table has a single unique index, add all of the unique index's columns to the supplemental log.

        Using SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS doesn't add the unique index columns to the log.

      • If no primary key exists and the table has multiple unique indexes, add all of the columns for the unique index that AWS DMS selects to the supplemental log. AWS DMS selects the first unique index from an alphabetically ordered, ascending list.

      • If no primary key exists and there is no unique index, add supplemental logging on all columns.

        Using SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS doesn't add the selected unique index columns to the log.

Note

In some cases, the target table primary key or unique index are different than the source table primary key or unique index. In these cases, add supplemental logging manually on the source table columns that make up the target table primary key or unique index.

Also, if you change the target table primary key, add supplemental logging on the target unique index's columns instead of the columns of the source primary key or unique index.
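The per-table rules above can be summarized as a selection function. This illustrative Python sketch (the function and parameter names are hypothetical) returns the columns to add to the supplemental log, including the rule that AWS DMS picks the alphabetically first unique index when several exist:

```python
def supplemental_log_columns(primary_key, unique_indexes, all_columns):
    """Select the columns to add to the supplemental log for a table.

    primary_key:    list of primary key column names (empty if none)
    unique_indexes: dict mapping unique index name -> list of columns
    all_columns:    every column in the table
    """
    if primary_key:
        return list(primary_key)
    if unique_indexes:
        # AWS DMS selects the first unique index from an
        # alphabetically ordered, ascending list.
        chosen = sorted(unique_indexes)[0]
        return list(unique_indexes[chosen])
    # No primary key and no unique index: log all columns.
    return list(all_columns)

cols = supplemental_log_columns(
    primary_key=[],
    unique_indexes={"UX_B": ["B1"], "UX_A": ["A1", "A2"]},
    all_columns=["A1", "A2", "B1", "C1"],
)
print(cols)  # ['A1', 'A2'] -- columns of UX_A, the alphabetically first index
```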

If a filter or transformation is defined for a table, you might need to enable additional logging. Some considerations are as follows:

  • If ALL COLUMNS supplemental logging has been added to the table, there is no need to add any additional logging.

  • If the table has a unique index or a primary key, add supplemental logging on each column that is involved in a filter or transformation, but only if those columns are different from the primary key or unique index columns.

  • If a transformation uses only one column, don't add this column to a supplemental logging group. For example, for a transformation A+B, add supplemental logging on both columns A and B. However, for a transformation substring(A,10) don't add supplemental logging on column A.

  • One method of setting up supplemental logging on both primary key or unique index columns and other specific columns that are filtered or transformed is to add USER_LOG_GROUP supplemental logging. Add this USER_LOG_GROUP supplemental logging on both the primary key/unique index columns and the other specific columns that are filtered or transformed.

    For example, to replicate a table named TEST.LOGGING with primary key ID and a filter by the column NAME, you can run a command similar to the one following to create the log group supplemental logging.

    ALTER TABLE TEST.LOGGING ADD SUPPLEMENTAL LOG GROUP TEST_LOG_GROUP (ID, NAME) ALWAYS;

    In this example, the log group covers both the primary key column ID and the filtered column NAME.

Working with an Amazon-Managed Oracle Database as a Source for AWS DMS

An Amazon-managed database is a database that is on an Amazon service such as Amazon RDS, Amazon Aurora, or Amazon S3. Following, you can find the privileges and configurations that you need to set up when using an Amazon-managed Oracle database with AWS DMS.

User Account Privileges Required on an Amazon-Managed Oracle Source for AWS DMS

To grant privileges on Oracle databases on Amazon RDS, use the stored procedure rdsadmin.rdsadmin_util.grant_sys_object. For more information, see Granting SELECT or EXECUTE privileges to SYS Objects.

Grant the following privileges to the AWS DMS user account that is used to access the source Oracle endpoint:

GRANT SELECT ANY TABLE to dms_user;
GRANT SELECT on ALL_VIEWS to dms_user;
GRANT SELECT ANY TRANSACTION to dms_user;
GRANT SELECT on DBA_TABLESPACES to dms_user;
GRANT SELECT on ALL_TAB_PARTITIONS to dms_user;
GRANT SELECT on ALL_INDEXES to dms_user;
GRANT SELECT on ALL_OBJECTS to dms_user;
GRANT SELECT on ALL_TABLES to dms_user;
GRANT SELECT on ALL_USERS to dms_user;
GRANT SELECT on ALL_CATALOG to dms_user;
GRANT SELECT on ALL_CONSTRAINTS to dms_user;
GRANT SELECT on ALL_CONS_COLUMNS to dms_user;
GRANT SELECT on ALL_TAB_COLS to dms_user;
GRANT SELECT on ALL_IND_COLUMNS to dms_user;
GRANT SELECT on ALL_LOG_GROUPS to dms_user;
GRANT LOGMINING TO dms_user;
GRANT READ ON DIRECTORY ONLINELOG_DIR TO dms_user; (for Binary Reader)
GRANT READ ON DIRECTORY ARCHIVELOG_DIR TO dms_user; (for Binary Reader)

In addition, run the following:

exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$THREAD','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$PARAMETER','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$NLS_PARAMETERS','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TIMEZONE_NAMES','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TRANSACTION','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_REGISTRY','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('OBJ$','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_ENCRYPTED_COLUMNS','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS','dms_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR','dms_user','EXECUTE');
exec rdsadmin.rdsadmin_util.grant_sys_object('REGISTRY$SQLPATCH','dms_user','SELECT'); (AWS DMS versions 3.3.1 and later)
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$STANDBY_LOG','dms_user','SELECT'); (for RDS Active Data Guard Standby (ADG))
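Because the rdsadmin grants all share the same shape, you can generate them from a list of objects. The following hypothetical Python sketch builds the statements as strings (it doesn't execute them against the database):

```python
def rdsadmin_grant(object_name, grantee, privilege):
    """Build one rdsadmin.rdsadmin_util.grant_sys_object call as text."""
    return (
        "exec rdsadmin.rdsadmin_util.grant_sys_object("
        f"'{object_name}','{grantee}','{privilege}');"
    )

select_objects = ["V_$ARCHIVED_LOG", "V_$LOG", "V_$LOGFILE", "V_$DATABASE"]
statements = [rdsadmin_grant(obj, "dms_user", "SELECT") for obj in select_objects]
statements.append(rdsadmin_grant("DBMS_LOGMNR", "dms_user", "EXECUTE"))
print(statements[0])
# exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG','dms_user','SELECT');
```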

Configuring an Amazon-Managed Oracle Source for AWS DMS

Before using an Amazon-managed Oracle database as a source for AWS DMS, perform the tasks following for the Oracle database:

  • Set up supplemental logging.

  • Enable automatic backups.

  • Set up archiving. Archiving the redo logs for your Amazon RDS for Oracle DB instance allows AWS DMS to retrieve the log information using Oracle LogMiner or Binary Reader.

Each of the preceding steps is described in more detail following.

To set up supplemental logging

  1. Run the command following.

    exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
  2. (Optional) Change supplemental logging attributes as needed by using the following commands.

    exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
    exec rdsadmin.rdsadmin_util.alter_supplemental_logging('DROP','PRIMARY KEY');

To enable automatic backups

  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the Management Settings section for your Amazon RDS Oracle database instance, set the Enabled Automatic Backups option to Yes.

To set up archiving

  1. Run the rdsadmin.rdsadmin_util.set_configuration command to set up archiving.

    For example, to retain the archived redo logs for 24 hours, run the command following.

    exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);
  2. Make sure that your storage has enough space for the archived redo logs during the specified retention period.

Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS

You can configure AWS DMS to use an Amazon RDS for Oracle instance as a source for CDC.

To use Oracle LogMiner, the minimum required user account privileges are sufficient. For more information, see User Account Privileges Required on an Amazon-Managed Oracle Source for AWS DMS.

To use AWS DMS Binary Reader, specify additional settings and extra connection attributes for the Oracle source endpoint, depending on your AWS DMS version. For Amazon RDS Oracle version 11g and 12.1 versions, see the post AWS DMS now supports Binary Reader for Amazon RDS for Oracle and Oracle Standby as a source on the AWS Database Blog.

For AWS DMS versions 3.3.1 and later, Binary Reader support is available in Amazon RDS Oracle versions 12.2 and 18c.

To configure CDC using AWS DMS Binary Reader

  1. Log in to your Amazon RDS Oracle source database as the master user.

  2. Run the stored procedures following to create the server-level directories. For more information, see Accessing Transaction Logs in the Amazon RDS User Guide.

    exec rdsadmin.rdsadmin_master_util.create_archivelog_dir;
    exec rdsadmin.rdsadmin_master_util.create_onlinelog_dir;
  3. Set the extra connection attributes following on the Amazon RDS Oracle source endpoint.

    useLogMinerReader=N;useBfile=Y
Note

There can be no white space following the semicolon separator (;) for multiple attribute settings (for example, oneSetting;thenAnother).

Limitations on Using Oracle as a Source for AWS DMS

The following limitations apply when using an Oracle database as a source for AWS DMS:

  • AWS DMS doesn't support long object names (over 30 bytes).

  • AWS DMS doesn't support function-based indexes.

  • If you use supplemental logging and carry out transformations on any of the columns, make sure that supplemental logging is activated for all fields and columns.

  • AWS DMS doesn't support multi-tenant container databases (CDB).

  • AWS DMS doesn't support deferred constraints.

  • AWS DMS supports the rename table table-name to new-table-name syntax for Oracle versions 11 and later.

  • AWS DMS doesn't replicate data changes that result from partition or subpartition operations (ADD, DROP, EXCHANGE, and TRUNCATE). Such updates might cause the errors following during replication:

    • For ADD operations, updates and deletes on the added data might raise a "0 rows affected" warning.

    • For DROP and TRUNCATE operations, new inserts might raise "duplicates" errors.

    • EXCHANGE operations might raise both a "0 rows affected" warning and "duplicates" errors.

    To replicate changes that result from partition or subpartition operations, reload the tables in question. After adding a new empty partition, operations on the newly added partition are replicated to the target as normal.

  • AWS DMS supports data changes on the target that result from running the CREATE TABLE AS statement on the source, except for Oracle supplied data types such as XMLType.

  • When you enable limited-size LOB mode, AWS DMS replicates LOBs from the Oracle source as NULL values on the target.

  • AWS DMS doesn't capture changes made by the Oracle DBMS_REDEFINITION package, for example the table metadata and the OBJECT_ID field.

  • AWS DMS maps empty BLOB and CLOB columns to NULL on the target.

  • When capturing changes with Oracle 11 LogMiner, an update on a CLOB column with a string length greater than 1982 is lost, and the target is not updated.

  • During change data capture (CDC), AWS DMS doesn't support batch updates to numeric columns defined as a primary key.

  • AWS DMS doesn't support certain UPDATE commands. The example following is an unsupported UPDATE command.

    UPDATE TEST_TABLE SET KEY=KEY+1;

    Here, TEST_TABLE is the table name and KEY is a numeric column defined as a primary key.

  • AWS DMS truncates any data in LONG or LONG RAW columns that is longer than 64 KB to 64 KB.

  • AWS DMS doesn't replicate tables whose names contain apostrophes.

  • AWS DMS doesn't support CDC from dynamic views.

  • When you use AWS DMS Binary Reader to access the redo logs, AWS DMS doesn't support CDC for index-organized tables with an overflow segment.

  • When you use Oracle LogMiner to access the redo logs, AWS DMS has the following limitations:

    • For Oracle 12 only, AWS DMS doesn't replicate any changes to LOB columns.

    • For all Oracle versions, AWS DMS doesn't replicate the result of UPDATE operations on XMLTYPE and LOB columns.

    • AWS DMS doesn't replicate results of the DDL statement ALTER TABLE ADD column data_type DEFAULT <default_value>. Instead of replicating <default_value> to the target, it sets the new column to NULL. Such a result can also happen even if the DDL statement that added the new column was run in a prior task.

      If the new column is nullable, Oracle updates all the table rows before logging the DDL itself. As a result, AWS DMS captures the changes but doesn't update the target. With the new column set to NULL, if the target table has no primary key or unique index, subsequent updates raise a "zero rows affected" message.

  • AWS DMS doesn't support connections to a pluggable database (PDB) using Oracle LogMiner. To connect to a PDB, access the redo logs using Binary Reader.

  • When you use Binary Reader, AWS DMS has these limitations:

    • It doesn't support table clusters.

    • It supports only SHRINK SPACE operations at the table level. This level includes the full table, partitions, and subpartitions.

    • It doesn't support changes to index-organized tables with key compression.

  • AWS DMS doesn't support transparent data encryption (TDE) when using Binary Reader with an Amazon RDS Oracle source.

  • AWS DMS doesn't support connections to an Amazon RDS Oracle source using an Oracle Automatic Storage Management (ASM) proxy.

  • AWS DMS doesn't support virtual columns.

  • AWS DMS doesn't support the ROWID data type or materialized views based on a ROWID column.

  • AWS DMS doesn't load or capture global temporary tables.

Supported Encryption Methods for Using Oracle as a Source for AWS DMS

In the following table, you can find the transparent data encryption (TDE) methods that AWS DMS supports when working with an Oracle source database.

Note

AWS DMS currently doesn't support the use of Oracle TDE in an Oracle version 19c database.

Redo Logs Access Method    TDE Tablespace    TDE Column
Oracle LogMiner            Yes               Yes
Binary Reader              Yes               Yes (AWS DMS versions 3.x and later)

As of version 3.x, AWS DMS supports Oracle TDE (on both the column and the tablespace) when using Binary Reader. Share the relevant encryption keys with AWS DMS as part of the Password field value specified for the Oracle source endpoint. For more information, see the paragraphs following.

To specify the correct encryption key or keys used for TDE tablespace encryption or TDE column encryption, find the relevant entry or entries in the Oracle Wallet that contain the encryption key or keys. After you find them, copy each entry name and its value. Then paste each entry name into the Names field and each value into the Values field. Separate multiple entry names and values in the respective fields with commas.

To copy and paste multiple values, copy each value into a text editor such as Notepad++, making sure to separate the values with commas. Then copy the string containing the comma-separated values from the text editor and paste it into the Values field. To copy and paste multiple entry names, you don't need an intermediate text editor. Instead, you can paste the names directly into the Names field, separating each name with a comma.
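As a sketch of that assembly, the comma-separated contents of the Names and Values fields can be built from the wallet entries like this (the entry names and key values here are hypothetical placeholders, not real wallet contents):

```python
# Hypothetical Oracle Wallet entries: entry name -> entry value.
# Real names look like ORACLE.SECURITY.DB/TS.ENCRYPTION.<suffix>;
# these placeholders are illustrative only.
wallet_entries = {
    "ORACLE.SECURITY.DB.ENCRYPTION.MkEyExample": "AEMAASAAexampleKey1",
    "ORACLE.SECURITY.TS.ENCRYPTION.BZuIExample": "AEMAASABexampleKey2",
}

# Keep names and values in the same order so each name pairs
# with its value across the two endpoint fields.
names_field = ",".join(wallet_entries)
values_field = ",".join(wallet_entries.values())

print(names_field)
print(values_field)
```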

To get Oracle Wallet entries, try the following:

  • If the ENCRYPTION_WALLET_LOCATION parameter is defined in the sqlnet.ora file, use the wallet from the directory defined by this parameter.

  • If the WALLET_LOCATION parameter is defined in the sqlnet.ora file, use the wallet from the directory defined by this parameter.

  • In other cases, use the wallet in the default database location.

    Note

    In any case, the name of the wallet should be ewallet.p12.

  • Use the -list option in the Oracle mkstore utility to determine the ORACLE.SECURITY.DB/TS.ENCRYPTION.SUFFIX entry name or names, as follows.

    mkstore -wrl full-wallet-name -list
  • If you know what entry or entries are used to encrypt the redo logs, first select the entry name or names. Then use the -viewEntry option in the Oracle mkstore utility to determine each entry value, as follows.

    mkstore -wrl full-wallet-name -viewEntry entry-name
  • If you don't know what entry is used to encrypt the redo logs, you can select multiple DB or TS entries and determine their values as described preceding. Then copy and paste the entry names and values into the Names and Values fields as described previously. If the specified entries aren't correct, the task fails and the error message contains the correct entry name.

  • If the DBA changes the entry while the task is running, the task fails and the error message contains the new entry name. Add the new entry (name and value) to the already specified entries and then resume the task. For more information on adding a new entry to resume the task, see the procedure following.

To add a new entry to resume a task

  1. Get the Oracle Wallet user name and password for TDE used for the table that is being migrated:

    1. On the Oracle database, run the following query to return the OBJECT_ID for a given owner and table name.

      SELECT OBJECT_ID FROM ALL_OBJECTS WHERE OWNER='table-owner' AND OBJECT_NAME='table-name' AND OBJECT_TYPE='TABLE';
    2. Use the retrieved OBJECT_ID value in the following query to return the relevant master key.

      SELECT MKEYID FROM SYS.ENC$ WHERE OBJ#=OBJECT_ID;
    3. Select the key value from the Oracle Wallet as follows.

      mkstore -wrl full-wallet-name -viewEntry entry-name
  2. Copy the master key entry into the Names field and its value into the Values field.

  3. After you have retrieved these details, specify the master key name on the Oracle source endpoint by using the securityDbEncryptionName extra connection attribute. For example, you might set the master key name to the value following.

    securityDbEncryptionName=ORACLE.SECURITY.DB.ENCRYPTION.Adg8m2dhkU/0v/m5QUaaNJEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

    Provide the associated password for this master key in the Password field for the Oracle source endpoint. Use the order following to format the respective comma-separated password values. Here, TDE_Password is the password associated with the master key.

    Oracle_db_password,ASM_Password,TDE_Password

    Specify the password values in this order regardless of your Oracle database configuration. For example, if you're using TDE but your Oracle database isn't using ASM, specify the relevant password values in the comma-separated order following.

    Oracle_db_password,,TDE_Password
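As a sketch, the three-slot Password field value described above can be assembled like this (placeholder passwords; the helper function name is our own illustration, not a DMS API):

```python
# Sketch: build the comma-separated Password field value for an Oracle
# source endpoint that uses TDE. The slot order is always:
#   Oracle_db_password,ASM_Password,TDE_Password
# An unused component leaves its slot empty.
def build_password_field(db_password, asm_password="", tde_password=""):
    return ",".join([db_password, asm_password, tde_password])

# TDE without ASM: the middle slot stays empty.
print(build_password_field("oradbpass", tde_password="tdepass"))
# -> oradbpass,,tdepass
```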

Supported Compression Methods for Using Oracle as a Source for AWS DMS

In the following table, you can find which compression methods AWS DMS supports when working with an Oracle source database. As the table shows, compression support depends both on your Oracle database version and whether DMS is configured to use Oracle LogMiner to access the redo logs.

Version                        Basic   OLTP   HCC (from Oracle 11g R2)     Others
Oracle 10                      No      N/A    N/A                          No
Oracle 11 – Oracle LogMiner    Yes     Yes    Yes                          Yes – Any compression method supported by Oracle LogMiner.
Oracle 11 – Binary Reader      Yes     Yes    Yes – See the note below.    Yes
Note

When the Oracle source endpoint is configured to use Binary Reader, the Query Low level of the HCC compression method is supported for full-load tasks only.

Replicating Nested Tables Using Oracle as a Source for AWS DMS

As of version 3.3.1, AWS DMS supports the replication of Oracle tables containing columns that are nested tables or defined types. To enable this functionality, add the extra connection attribute setting following to the Oracle source endpoint.

allowSelectNestedTables=true;

AWS DMS creates the target tables from Oracle nested tables as regular parent and child tables on the target without a unique constraint. To access the correct data on the target, join the parent and child tables. To do this, first manually create a nonunique index on the NESTED_TABLE_ID column in the target child table. You can then use the NESTED_TABLE_ID column in the join ON clause together with the parent column that corresponds to the child table name. In addition, creating such an index improves performance when the target child table data is updated or deleted by AWS DMS. For an example, see Example Join for Parent and Child Tables on the Target.

We recommend that you configure the task to stop after a full load completes. Then, create these nonunique indexes for all the replicated child tables on the target and resume the task.

If a captured nested table is added to an existing parent table (captured or not captured), AWS DMS handles it correctly. However, the nonunique index for the corresponding target table isn't created. In this case, if the target child table becomes extremely large, performance might be affected. In such a case, we recommend that you stop the task, create the index, then resume the task.

After the nested tables are replicated to the target, have the DBA run a join on the parent and corresponding child tables to flatten the data.

Prerequisites for Replicating Oracle Nested Tables as a Source

Ensure that you replicate parent tables for all the replicated nested tables. Include both the parent tables (the tables containing the nested table column) and the child (that is, nested) tables in the AWS DMS table mappings.

Supported Oracle Nested Table Types as a Source

AWS DMS supports the following Oracle nested table types as a source:

  • Data type

  • User defined object

Limitations of AWS DMS Support for Oracle Nested Tables as a Source

AWS DMS has the following limitations in its support of Oracle nested tables as a source:

  • AWS DMS supports only one level of table nesting.

  • AWS DMS table mapping doesn’t check that both the parent and child table or tables are selected for replication. That is, it's possible to select a parent table without a child or a child table without a parent.

How AWS DMS Replicates Oracle Nested Tables as a Source

AWS DMS replicates parent and nested tables to the target as follows:

  • AWS DMS creates the parent table identical to the source. It then defines the nested column in the parent as RAW(16) and includes a reference to the parent's nested tables in its NESTED_TABLE_ID column.

  • AWS DMS creates the child table identical to the nested source, but with an additional column named NESTED_TABLE_ID. This column has the same type and value as the corresponding parent nested column and has the same meaning.

Example Join for Parent and Child Tables on the Target

To flatten the parent table, run a join between the parent and child tables, as shown in the following example:

  1. Create the Type table.

    CREATE OR REPLACE TYPE NESTED_TEST_T AS TABLE OF VARCHAR(50);
  2. Create the parent table with a column of type NESTED_TEST_T as defined preceding.

    CREATE TABLE NESTED_PARENT_TEST (ID NUMBER(10,0) PRIMARY KEY, NAME NESTED_TEST_T) NESTED TABLE NAME STORE AS NAME_KEY;
  3. Flatten the table NESTED_PARENT_TEST using a join with the NAME_KEY child table where CHILD.NESTED_TABLE_ID matches PARENT.NAME.

    SELECT … FROM NESTED_PARENT_TEST PARENT, NAME_KEY CHILD WHERE CHILD.NESTED_TABLE_ID = PARENT.NAME;

Extra Connection Attributes When Using Oracle as a Source for AWS DMS

You can use extra connection attributes to configure your Oracle source. You specify these settings when you create the source endpoint. If you have multiple connection attribute settings, separate them from each other by semicolons with no additional white space (for example, oneSetting;thenAnother).
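As a sketch, a semicolon-separated attribute string can be assembled programmatically like this (the attribute names are real ones from this section; the dictionary-based assembly is our own illustration):

```python
# Sketch: join extra connection attribute settings with semicolons
# and no additional whitespace, as the endpoint expects.
attributes = {
    "useLogminerReader": "N",
    "useBfile": "Y",
    "archivedLogDestId": "1",
}

extra_connection_attributes = ";".join(
    f"{name}={value}" for name, value in attributes.items()
)
print(extra_connection_attributes)
# -> useLogminerReader=N;useBfile=Y;archivedLogDestId=1
```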

The following table shows the extra connection attributes that you can use to configure an Oracle database as a source for AWS DMS.

Name Description

addSupplementalLogging

Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.

Default value: N

Valid values: Y/N

Example: addSupplementalLogging=Y;

Note

If you use this option, you still need to enable database-level supplemental logging as discussed previously.

additionalArchivedLogDestId

Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a failover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after failover.

allowSelectNestedTables

Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types. For more information, see Replicating Nested Tables Using Oracle as a Source for AWS DMS.

Default value: false

Valid values: true/false

Example: allowSelectNestedTables=true;

useLogminerReader

Set this attribute to Y to capture change data using the LogMiner utility (the default). Set this option to N if you want AWS DMS to access the redo logs as a binary file. When you set this option to N, also add the setting useBfile=Y. For more information on this setting and using Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or AWS DMS Binary Reader for Change Data Capture (CDC).

Default value: Y

Valid values: Y/N

Example: useLogminerReader=N;useBfile=Y;

useBfile

Set this attribute to Y to capture change data using the Binary Reader utility. To set this attribute to Y, you must also set useLogminerReader to N. To use Binary Reader with an Amazon RDS for Oracle source, you set additional attributes. For more information on this setting and using Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or AWS DMS Binary Reader for Change Data Capture (CDC).

Default value: N

Valid values: Y/N

Example: useLogminerReader=N;useBfile=Y;

parallelASMReadThreads

Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: 2

Valid values: An integer from 2 to 8

Example: parallelASMReadThreads=6;readAheadBlocks=150000;

readAheadBlocks

Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum). Use this attribute together with the parallelASMReadThreads attribute. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: 1000

Valid values: An integer from 1000 to 200,000

Example: parallelASMReadThreads=6;readAheadBlocks=150000;

accessAlternateDirectly

Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: true

Valid values: true/false

Example: useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;

useAlternateFolderForOnline

Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: false

Valid values: true/false

Example: useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;useAlternateFolderForOnline=true;

oraclePathPrefix

Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: none

Valid value: /rdsdbdata/db/ORCL_A/

Example: useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;useAlternateFolderForOnline=true;oraclePathPrefix=/rdsdbdata/db/ORCL_A/;

usePathPrefix

Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: none

Valid value: /rdsdbdata/log/

Example: useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;useAlternateFolderForOnline=true;oraclePathPrefix=/rdsdbdata/db/ORCL_A/;usePathPrefix=/rdsdbdata/log/;

replacePathPrefix

Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs. For more information, see Configuring Change Data Capture (CDC) for an Amazon RDS for Oracle Source for AWS DMS.

Default value: false

Valid values: true/false

Example: useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;useAlternateFolderForOnline=true;oraclePathPrefix=/rdsdbdata/db/ORCL_A/;usePathPrefix=/rdsdbdata/log/;replacePathPrefix=true;

retryInterval

Specifies the number of seconds that the system waits before resending a query.

Default value: 5

Valid values: Numbers starting from 1

Example: retryInterval=6;

archivedLogDestId

Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo log location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.

Default value: 0

Valid values: Number

Example: archivedLogDestId=1;

archivedLogsOnly

When this field is set to Y, AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.

Default value: N

Valid values: Y/N

Example: archivedLogsOnly=Y;

numberDataTypeScale

Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.

Default value: 10

Valid values: -1 to 38 (-1 for FLOAT)

Example: numberDataTypeScale=12;

afterConnectScript

Specifies a script to run immediately after AWS DMS connects to the endpoint.

Valid values: A SQL statement set off by a semicolon. Not all SQL statements are supported.

Example: afterConnectScript=ALTER SESSION SET CURRENT_SCHEMA = system;

failTasksOnLobTruncation

When set to true, this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize.

If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.

Default value: false

Valid values: Boolean

Example: failTasksOnLobTruncation=true;

readTableSpaceName

When set to true, this attribute supports tablespace replication.

Default value: false

Valid values: Boolean

Example: readTableSpaceName=true;

standbyDelayTime

Use this attribute to specify a time in minutes for the delay in standby sync.

With AWS DMS version 2.3.0 and later, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.

Default value: 0

Valid values: Number

Example: standbyDelayTime=1;

securityDbEncryptionName

Specifies the name of a key used for the transparent data encryption (TDE) of the columns and tablespace in the Oracle source database. For more information on setting this attribute and its associated password on the Oracle source endpoint, see Supported Encryption Methods for Using Oracle as a Source for AWS DMS.

Default value: ""

Valid values: String

Example: securityDbEncryptionName=ORACLE.SECURITY.DB.ENCRYPTION.Adg8m2dhkU/0v/m5QUaaNJEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

spatialSdo2GeoJsonFunctionName

For Oracle version 12.1 or earlier sources migrating to PostgreSQL targets, use this attribute to convert SDO_GEOMETRY to GEOJSON format.

By default, AWS DMS calls the SDO2GEOJSON custom function, which must be present and accessible to the AWS DMS user. Alternatively, you can create your own custom function that mimics the operation of SDO2GEOJSON and set spatialSdo2GeoJsonFunctionName to call it instead.

Default value: SDO2GEOJSON

Valid values: String

Example: spatialSdo2GeoJsonFunctionName=myCustomSDO2GEOJSONFunction;

Source Data Types for Oracle

The Oracle endpoint for AWS DMS supports most Oracle data types. The following table shows the Oracle source data types that are supported when using AWS DMS and the default mapping to AWS DMS data types.

For information on how to view the data type that is mapped in the target, see the section for the target endpoint you are using.

For additional information about AWS DMS data types, see Data Types for AWS Database Migration Service.

Oracle Data Type

AWS DMS Data Type

BINARY_FLOAT

REAL4

BINARY_DOUBLE

REAL8

BINARY

BYTES

FLOAT (P)

If precision is less than or equal to 24, use REAL4.

If precision is greater than 24, use REAL8.

NUMBER (P,S)

NUMBER is converted according to the "Expose number as" property in the Oracle source database settings.

When scale is less than 0, use REAL8.

When scale is 0:

  • And precision is 0, use REAL8.

  • And precision is less than or equal to 2, use INT1.

  • And precision is greater than 2 and less than or equal to 4, use INT2.

  • And precision is greater than 4 and less than or equal to 9, use INT4.

  • And precision is greater than 9, use NUMERIC.

  • And precision is greater than or equal to scale, use NUMERIC.

In all other cases, use REAL8.

DATE

DATETIME

INTERVAL_YEAR TO MONTH

STRING (with interval year_to_month indication)

INTERVAL_DAY TO SECOND

STRING (with interval day_to_second indication)

TIME

DATETIME

TIMESTAMP

DATETIME

TIMESTAMP WITH TIME ZONE

STRING (with timestamp_with_timezone indication)

TIMESTAMP WITH LOCAL TIME ZONE

STRING (with timestamp_with_local_timezone indication)

CHAR

STRING

VARCHAR2

STRING

NCHAR

WSTRING

NVARCHAR2

WSTRING

RAW

BYTES

REAL

REAL8

BLOB

BLOB

To use this data type with AWS DMS, you must enable the use of BLOB data types for a specific task. AWS DMS supports BLOB data types only in tables that include a primary key.

CLOB

CLOB

To use this data type with AWS DMS, you must enable the use of CLOB data types for a specific task. During change data capture (CDC), AWS DMS supports CLOB data types only in tables that include a primary key.

NCLOB

NCLOB

To use this data type with AWS DMS, you must enable the use of NCLOB data types for a specific task. During CDC, AWS DMS supports NCLOB data types only in tables that include a primary key.

LONG

CLOB

The LONG data type isn't supported in batch-optimized apply mode (TurboStream CDC mode). To use this data type with AWS DMS, you must enable the use of LOBs for a specific task. During CDC, AWS DMS supports LOB data types only in tables that have a primary key.

LONG RAW

BLOB

The LONG RAW data type isn't supported in batch-optimized apply mode (TurboStream CDC mode). To use this data type with AWS DMS, you must enable the use of LOBs for a specific task. During CDC, AWS DMS supports LOB data types only in tables that have a primary key.

XMLTYPE

CLOB

Support for the XMLTYPE data type requires the full Oracle Client (as opposed to the Oracle Instant Client). When the target column is a CLOB, both full LOB mode and limited LOB mode are supported (depending on the target).

SDO_GEOMETRY

BLOB (when an Oracle to Oracle migration)

CLOB (when an Oracle to PostgreSQL migration)

Oracle tables used as a source with columns of the following data types aren't supported and can't be replicated. Replicating columns with these data types results in a null column.

  • BFILE

  • ROWID

  • REF

  • UROWID

  • User-defined data types

  • ANYDATA

Note

Virtual columns aren't supported.
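The default NUMBER(P,S) conversion rules from the preceding table can be sketched as a small decision function (illustrative only; the table itself is authoritative):

```python
# Sketch of the default NUMBER(P,S) -> AWS DMS type mapping rules
# from the data type table above. Illustrative only.
def map_oracle_number(precision, scale):
    if scale < 0:
        return "REAL8"
    if scale == 0:
        if precision == 0:
            return "REAL8"
        if precision <= 2:
            return "INT1"
        if precision <= 4:
            return "INT2"
        if precision <= 9:
            return "INT4"
        return "NUMERIC"
    if precision >= scale:
        return "NUMERIC"
    return "REAL8"

print(map_oracle_number(5, 0))   # -> INT4
print(map_oracle_number(10, 2))  # -> NUMERIC
```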

Migrating Oracle Spatial Data Types

Spatial data identifies the geometry information for an object or location in space. In an Oracle database, the geometric description of a spatial object is stored in an object of type SDO_GEOMETRY. Within this object, the geometric description is stored in a single row in a single column of a user-defined table.

AWS DMS supports migrating the Oracle type SDO_GEOMETRY from an Oracle source to either an Oracle or PostgreSQL target.

When you migrate Oracle spatial data types using AWS DMS, be aware of these considerations:

  • When migrating to an Oracle target, make sure to manually transfer USER_SDO_GEOM_METADATA entries that include type information.

  • When migrating from an Oracle source endpoint to a PostgreSQL target endpoint, AWS DMS creates target columns. These columns have default geometry and geography type information with a 2D dimension and a spatial reference identifier (SRID) equal to zero (0). An example is <GEOMETRY, 2, 0>.

  • For Oracle version 12.1 or earlier sources migrating to PostgreSQL targets, convert SDO_GEOMETRY objects to GEOJSON format by using the SDO2GEOJSON function, or the spatialSdo2GeoJsonFunctionName extra connection attribute. For more information, see Extra Connection Attributes When Using Oracle as a Source for AWS DMS.