Using an Oracle database as a source for AWS DMS
You can migrate data from one or many Oracle databases using AWS DMS. With an Oracle database as a source, you can migrate data to any of the targets supported by AWS DMS.
AWS DMS supports the following Oracle database editions:
- Oracle Enterprise Edition
- Oracle Standard Edition
- Oracle Express Edition
- Oracle Personal Edition
For information about versions of Oracle databases that AWS DMS supports as a source, see Sources for AWS DMS.
You can use Secure Sockets Layer (SSL) to encrypt connections between your Oracle endpoint and your replication instance. For more information on using SSL with an Oracle endpoint, see SSL support for an Oracle endpoint.
AWS DMS supports the use of Oracle transparent data encryption (TDE) to encrypt data at rest in the source database. For more information on using Oracle TDE with an Oracle source endpoint, see Supported encryption methods for using Oracle as a source for AWS DMS.
AWS supports the use of TLS version 1.2 and later with Oracle endpoints (and all other endpoint types), and recommends using TLS version 1.3 or later.
Follow these steps to configure an Oracle database as an AWS DMS source endpoint:
- Create an Oracle user with the appropriate permissions for AWS DMS to access your Oracle source database.

- Create an Oracle source endpoint that conforms with your chosen Oracle database configuration. To create a full-load-only task, no further configuration is needed.

- To create a task that handles change data capture (a CDC-only or full-load and CDC task), choose Oracle LogMiner or AWS DMS Binary Reader to capture data changes. Choosing LogMiner or Binary Reader determines some of the later permissions and configuration options. For a comparison of LogMiner and Binary Reader, see the following section.
Note
For more information on full-load tasks, CDC-only tasks, and full-load and CDC tasks, see Creating a task.
For additional details on working with Oracle source databases and AWS DMS, see the following sections.
Topics
- Using Oracle LogMiner or AWS DMS Binary Reader for CDC
- Workflows for configuring a self-managed or AWS-managed Oracle source database for AWS DMS
- Working with a self-managed Oracle database as a source for AWS DMS
- Working with an AWS-managed Oracle database as a source for AWS DMS
- Limitations on using Oracle as a source for AWS DMS
- SSL support for an Oracle endpoint
- Supported encryption methods for using Oracle as a source for AWS DMS
- Supported compression methods for using Oracle as a source for AWS DMS
- Replicating nested tables using Oracle as a source for AWS DMS
- Storing REDO on Oracle ASM when using Oracle as a source for AWS DMS
- Endpoint settings when using Oracle as a source for AWS DMS
- Source data types for Oracle
Using Oracle LogMiner or AWS DMS Binary Reader for CDC
In AWS DMS, there are two methods for reading the redo logs when doing change data capture (CDC) for Oracle as a source: Oracle LogMiner and AWS DMS Binary Reader. LogMiner is an Oracle API to read the online redo logs and archived redo log files. Binary Reader is an AWS DMS method that reads and parses the raw redo log files directly. These methods have the following features.
Feature | LogMiner | Binary Reader |
---|---|---|
Easy to configure | Yes | No |
Lower impact on source system I/O and CPU | No | Yes |
Better CDC performance | No | Yes |
Supports Oracle table clusters | Yes | No |
Supports all types of Oracle Hybrid Columnar Compression (HCC) | Yes | Partially. Binary Reader doesn't support QUERY LOW for tasks with CDC. All other HCC types are fully supported. |
LOB column support in Oracle 12c | No (LOB support is not available with LogMiner in Oracle 12c.) | Yes |
Supports UPDATE statements that affect only LOB columns | No | Yes |
Supports Oracle transparent data encryption (TDE) | Partially. When using Oracle LogMiner, AWS DMS doesn't support column-level TDE encryption for Amazon RDS for Oracle. | Partially. Binary Reader supports TDE only for self-managed Oracle databases. |
Supports all Oracle compression methods | Yes | No |
Supports XA transactions | No | Yes |
RAC | Yes, but not recommended due to performance reasons and some internal DMS limitations. | Yes, highly recommended. |
Note
By default, AWS DMS uses Oracle LogMiner for CDC.
AWS DMS supports transparent data encryption (TDE) methods when working with an Oracle source database. If the TDE credentials you specify are incorrect, the AWS DMS migration task doesn't fail, which can impact ongoing replication of encrypted tables. For more information about specifying TDE credentials, see Supported encryption methods for using Oracle as a source for AWS DMS.
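As a sketch of how you might gather the TDE details to specify, you can query the wallet (keystore) location and status on the source database. This assumes your user can read the V$ENCRYPTION_WALLET dynamic view; the columns returned vary somewhat by Oracle version.

```sql
-- Locate the TDE wallet (keystore) on the source database.
SELECT wrl_type, wrl_parameter, status
FROM   v$encryption_wallet;
```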
The main advantages of using LogMiner with AWS DMS include the following:
- LogMiner supports most Oracle options, such as encryption options and compression options. Binary Reader doesn't support all Oracle options, particularly compression and most options for encryption.

- LogMiner offers a simpler configuration, especially compared to Binary Reader direct-access setup or when the redo logs are managed using Oracle Automatic Storage Management (ASM).

- LogMiner supports table clusters for use by AWS DMS. Binary Reader doesn't.
The main advantages of using Binary Reader with AWS DMS include the following:
- For migrations with a high volume of changes, LogMiner might have some I/O or CPU impact on the computer hosting the Oracle source database. Binary Reader has less chance of having I/O or CPU impact because logs are mined directly rather than making multiple database queries.

- For migrations with a high volume of changes, CDC performance is usually much better when using Binary Reader compared with using Oracle LogMiner.

- Binary Reader supports CDC for LOBs in Oracle version 12c. LogMiner doesn't.
In general, use Oracle LogMiner for migrating your Oracle database unless you have one of the following situations:
- You need to run several migration tasks on the source Oracle database.

- The volume of changes or the redo log volume on the source Oracle database is high, or you have changes and are also using Oracle ASM.
Note
If you change between using Oracle LogMiner and AWS DMS Binary Reader, make sure to restart the CDC task.
Configuration for CDC on an Oracle source database
For an Oracle source endpoint to connect to the database for a change data capture (CDC) task, you might need to specify extra connection attributes. This can be true for either a full-load and CDC task or for a CDC-only task. The extra connection attributes that you specify depend on the method you use to access the redo logs: Oracle LogMiner or AWS DMS Binary Reader.
You specify extra connection attributes when you create a source endpoint. If you have multiple connection attribute settings, separate them from each other by semicolons with no additional white space (for example, oneSetting;thenAnother).
AWS DMS uses LogMiner by default. You don't have to specify additional extra connection attributes to use it.
To use Binary Reader to access the redo logs, add the following extra connection attributes.
useLogMinerReader=N;useBfile=Y;
Use the following format for the extra connection attributes to access a server that uses ASM with Binary Reader.
useLogMinerReader=N;useBfile=Y;asm_user=asm_username;asm_server=RAC_server_ip_address:port_number/+ASM;
Set the source endpoint Password request parameter to both the Oracle user password and the ASM password, separated by a comma as follows.

oracle_user_password,asm_user_password
Where the Oracle source uses ASM, you can work with high-performance options in Binary Reader for transaction processing at scale. These options include extra connection attributes to specify the number of parallel threads (parallelASMReadThreads) and the number of read-ahead buffers (readAheadBlocks). Setting these attributes together can significantly improve the performance of the CDC task. The following settings provide good results for most ASM configurations.
useLogMinerReader=N;useBfile=Y;asm_user=asm_username;asm_server=RAC_server_ip_address:port_number/+ASM;parallelASMReadThreads=6;readAheadBlocks=150000;
For more information on values that extra connection attributes support, see Endpoint settings when using Oracle as a source for AWS DMS.
In addition, the performance of a CDC task with an Oracle source that uses ASM depends on other settings that you choose. These settings include your AWS DMS extra connection attributes and the SQL settings to configure the Oracle source. For more information on extra connection attributes for an Oracle source using ASM, see Endpoint settings when using Oracle as a source for AWS DMS.
You also need to choose an appropriate CDC start point. Typically when you do this, you want to identify the point of transaction processing that captures the earliest open transaction to begin CDC from. Otherwise, the CDC task can miss earlier open transactions. For an Oracle source database, you can choose a CDC native start point based on the Oracle system change number (SCN) to identify this earliest open transaction. For more information, see Performing replication starting from a CDC start point.
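As a sketch, one way to identify an SCN-based native start point is to capture the current system change number on the source before the transactions you want replicated begin. The V$DATABASE view exposes the current SCN:

```sql
-- Capture the current SCN to use as a CDC native start point.
SELECT current_scn FROM v$database;
```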
For more information on configuring CDC for a self-managed Oracle database as a source, see Account privileges required when using Oracle LogMiner to access the redo logs, Account privileges required when using AWS DMS Binary Reader to access the redo logs, and Additional account privileges required when using Binary Reader with Oracle ASM.
For more information on configuring CDC for an AWS-managed Oracle database as a source, see Configuring a CDC task to use Binary Reader with an RDS for Oracle source for AWS DMS and Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS.
Workflows for configuring a self-managed or AWS-managed Oracle source database for AWS DMS
Configuring an Oracle source database
To configure a self-managed source database instance, use the following workflow steps, depending on how you perform CDC.
Use the following workflow steps to configure an AWS-managed Oracle source database instance.
For this workflow step | If you perform CDC using LogMiner, do this | If you perform CDC using Binary Reader, do this |
---|---|---|
Grant Oracle account privileges. | For more information, see User account privileges required on an AWS-managed Oracle source for AWS DMS. | For more information, see User account privileges required on an AWS-managed Oracle source for AWS DMS. |
Prepare the source database for replication using CDC. | For more information, see Configuring an AWS-managed Oracle source for AWS DMS. | For more information, see Configuring an AWS-managed Oracle source for AWS DMS. |
Grant additional Oracle user privileges required for CDC. | No additional account privileges are required. | For more information, see Configuring a CDC task to use Binary Reader with an RDS for Oracle source for AWS DMS. |
If you haven't already done so, configure the task to use LogMiner or Binary Reader for CDC. | For more information, see Using Oracle LogMiner or AWS DMS Binary Reader for CDC. | For more information, see Using Oracle LogMiner or AWS DMS Binary Reader for CDC. |
Configure Oracle Standby as a source for CDC. | AWS DMS doesn't support Oracle Standby as a source. | For more information, see Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS. |
Working with a self-managed Oracle database as a source for AWS DMS
A self-managed database is a database that you configure and control, either a local on-premises database instance or a database on Amazon EC2. Following, you can find out about the privileges and configurations you need when using a self-managed Oracle database with AWS DMS.
User account privileges required on a self-managed Oracle source for AWS DMS
To use an Oracle database as a source in AWS DMS, grant the following privileges to the Oracle user specified in the Oracle endpoint connection settings.
Note
When granting privileges, use the actual name of objects, not the synonym for each object. For example, use V_$OBJECT including the underscore, not V$OBJECT without the underscore.
GRANT CREATE SESSION TO db_user;
GRANT SELECT ANY TRANSACTION TO db_user;
GRANT SELECT ON V_$ARCHIVED_LOG TO db_user;
GRANT SELECT ON V_$LOG TO db_user;
GRANT SELECT ON V_$LOGFILE TO db_user;
GRANT SELECT ON V_$LOGMNR_LOGS TO db_user;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO db_user;
GRANT SELECT ON V_$DATABASE TO db_user;
GRANT SELECT ON V_$THREAD TO db_user;
GRANT SELECT ON V_$PARAMETER TO db_user;
GRANT SELECT ON V_$NLS_PARAMETERS TO db_user;
GRANT SELECT ON V_$TIMEZONE_NAMES TO db_user;
GRANT SELECT ON V_$TRANSACTION TO db_user;
GRANT SELECT ON V_$CONTAINERS TO db_user;
GRANT SELECT ON ALL_INDEXES TO db_user;
GRANT SELECT ON ALL_OBJECTS TO db_user;
GRANT SELECT ON ALL_TABLES TO db_user;
GRANT SELECT ON ALL_USERS TO db_user;
GRANT SELECT ON ALL_CATALOG TO db_user;
GRANT SELECT ON ALL_CONSTRAINTS TO db_user;
GRANT SELECT ON ALL_CONS_COLUMNS TO db_user;
GRANT SELECT ON ALL_TAB_COLS TO db_user;
GRANT SELECT ON ALL_IND_COLUMNS TO db_user;
GRANT SELECT ON ALL_ENCRYPTED_COLUMNS TO db_user;
GRANT SELECT ON ALL_LOG_GROUPS TO db_user;
GRANT SELECT ON ALL_TAB_PARTITIONS TO db_user;
GRANT SELECT ON SYS.DBA_REGISTRY TO db_user;
GRANT SELECT ON SYS.OBJ$ TO db_user;
GRANT SELECT ON DBA_TABLESPACES TO db_user;
GRANT SELECT ON DBA_OBJECTS TO db_user; -- Required if the Oracle version is earlier than 11.2.0.3.
GRANT SELECT ON SYS.ENC$ TO db_user; -- Required if transparent data encryption (TDE) is enabled. For more information on using Oracle TDE with AWS DMS, see Supported encryption methods for using Oracle as a source for AWS DMS.
GRANT SELECT ON GV_$TRANSACTION TO db_user; -- Required if the source database is Oracle RAC in AWS DMS versions 3.4.6 and higher.
GRANT SELECT ON V_$DATAGUARD_STATS TO db_user; -- Required if the source database is Oracle Data Guard and Oracle Standby is used in the latest release of DMS version 3.4.6, version 3.4.7, and higher.
GRANT SELECT ON V$DATABASE_INCARNATION TO db_user;
Grant the following additional privilege for each replicated table when you are using a specific table list.

GRANT SELECT ON any-replicated-table TO db_user;
Grant the following additional privilege to validate LOB columns with the validation feature.

GRANT EXECUTE ON SYS.DBMS_CRYPTO TO db_user;
Grant the following additional privilege if you use Binary Reader instead of LogMiner.

GRANT SELECT ON SYS.DBA_DIRECTORIES TO db_user;
Grant the following additional privilege to expose views.

GRANT SELECT on ALL_VIEWS to dms_user;

To expose views, you must also add the exposeViews=true extra connection attribute to your source endpoint.
Grant the following additional privilege when using serverless replications.

GRANT SELECT on dba_segments to db_user;
For information about serverless replications, see Working with AWS DMS Serverless.
Grant the following additional privileges when using Oracle-specific premigration assessments.

GRANT SELECT on gv_$parameter to dms_user;
GRANT SELECT on v_$instance to dms_user;
GRANT SELECT on v_$version to dms_user;
GRANT SELECT on gv_$ASM_DISKGROUP to dms_user;
GRANT SELECT on gv_$database to dms_user;
GRANT SELECT on dba_db_links to dms_user;
GRANT SELECT on gv_$log_History to dms_user;
GRANT SELECT on gv_$log to dms_user;
GRANT SELECT ON DBA_TYPES TO db_user;
GRANT SELECT ON DBA_USERS to dms_user;
GRANT SELECT ON DBA_DIRECTORIES to dms_user;
For information about Oracle-specific premigration assessments, see Oracle assessments.
Prerequisites for handling open transactions for Oracle Standby
When using AWS DMS versions 3.4.6 and higher, perform the following steps to handle open transactions for Oracle Standby.
- Create a database link named AWSDMS_DBLINK on the primary database. DMS_USER will use the database link to connect to the primary database. Note that the database link is executed from the standby instance to query the open transactions running on the primary database. See the following example.

CREATE PUBLIC DATABASE LINK AWSDMS_DBLINK
  CONNECT TO DMS_USER IDENTIFIED BY DMS_USER_PASSWORD
  USING '(DESCRIPTION=
           (ADDRESS=(PROTOCOL=TCP)(HOST=PRIMARY_HOST_NAME_OR_IP)(PORT=PORT))
           (CONNECT_DATA=(SERVICE_NAME=SID)))';

- Verify that the connection to the database link using DMS_USER is established, as shown in the following example.

select 1 from dual@AWSDMS_DBLINK;
Preparing an Oracle self-managed source database for CDC using AWS DMS
Prepare your self-managed Oracle database as a source to run a CDC task by doing the following:
Verifying that AWS DMS supports the source database version
Run a query like the following to verify that the current version of the Oracle source database is supported by AWS DMS.
SELECT name, value, description FROM v$parameter WHERE name = 'compatible';
Here, name, value, and description are columns of the v$parameter view that are queried based on the value of name. If this query runs without error, AWS DMS supports the current version of the database and you can continue with the migration. If the query raises an error, AWS DMS doesn't support the current version of the database. To proceed with migration, first convert the Oracle database to a version supported by AWS DMS.
Making sure that ARCHIVELOG mode is on
You can run Oracle in two different modes: ARCHIVELOG mode and NOARCHIVELOG mode. To run a CDC task, run the database in ARCHIVELOG mode. To know if the database is in ARCHIVELOG mode, execute the following query.

SQL> SELECT log_mode FROM v$database;

If NOARCHIVELOG mode is returned, set the database to ARCHIVELOG per Oracle instructions.
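If the database reports NOARCHIVELOG, the following sketch shows one common sequence for switching it to ARCHIVELOG mode. It assumes SYSDBA access and involves a brief outage, because the database must be restarted in MOUNT state; check the Oracle documentation for your version and environment before running it.

```sql
-- Run as SYSDBA; the database is briefly unavailable during the switch.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```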
Setting up supplemental logging
To capture ongoing changes, AWS DMS requires that you enable minimal supplemental logging on your Oracle source database. In addition, you need to enable supplemental logging on each replicated table in the database.
By default, AWS DMS adds PRIMARY KEY supplemental logging on all replicated tables. To allow AWS DMS to add PRIMARY KEY supplemental logging, grant the following privilege for each replicated table.

GRANT ALTER ON any-replicated-table TO db_user;
You can disable the default PRIMARY KEY supplemental logging added by AWS DMS using the extra connection attribute addSupplementalLogging. For more information, see Endpoint settings when using Oracle as a source for AWS DMS.

Make sure to turn on supplemental logging if your replication task updates a table using a WHERE clause that doesn't reference a primary key column.
To manually set up supplemental logging
- Run the following query to verify if supplemental logging is already enabled for the database.

SELECT supplemental_log_data_min FROM v$database;

If the result returned is YES or IMPLICIT, supplemental logging is enabled for the database. If not, enable supplemental logging for the database by running the following command.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

- Make sure that the required supplemental logging is added for each replicated table.

Consider the following:

- If ALL COLUMNS supplemental logging is added to the table, you don't need to add more logging.

- If a primary key exists, add supplemental logging for the primary key. You can do this either by using the format to add supplemental logging on the primary key itself, or by adding supplemental logging on the primary key columns on the database.

ALTER TABLE Tablename ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

- If no primary key exists and the table has a single unique index, add all of the unique index's columns to the supplemental log.

ALTER TABLE TableName ADD SUPPLEMENTAL LOG GROUP LogGroupName (UniqueIndexColumn1 [, UniqueIndexColumn2] ...) ALWAYS;

Using SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS doesn't add the unique index columns to the log.

- If no primary key exists and the table has multiple unique indexes, AWS DMS selects the first unique index in an alphabetically ordered ascending list. You need to add supplemental logging on the selected index's columns as in the previous item. Using SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS doesn't add the unique index columns to the log.

- If no primary key exists and there is no unique index, add supplemental logging on all columns.

ALTER TABLE TableName ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

In some cases, the target table primary key or unique index is different than the source table primary key or unique index. In such cases, add supplemental logging manually on the source table columns that make up the target table primary key or unique index.

Also, if you change the target table primary key, add supplemental logging on the target unique index's columns instead of the columns of the source primary key or unique index.
- If a filter or transformation is defined for a table, you might need to enable additional logging.

Consider the following:

- If ALL COLUMNS supplemental logging is added to the table, you don't need to add more logging.

- If the table has a unique index or a primary key, add supplemental logging on each column that is involved in a filter or transformation. However, do so only if those columns are different from the primary key or unique index columns.

- If a transformation includes only one column, don't add this column to a supplemental logging group. For example, for a transformation A+B, add supplemental logging on both columns A and B. However, for a transformation substring(A,10), don't add supplemental logging on column A.

- To set up supplemental logging on primary key or unique index columns and other columns that are filtered or transformed, you can set up USER_LOG_GROUP supplemental logging. Add this logging on both the primary key or unique index columns and any other specific columns that are filtered or transformed.

For example, to replicate a table named TEST.LOGGING with primary key ID and a filter by the column NAME, you can run a command similar to the following to create the log group supplemental logging.

ALTER TABLE TEST.LOGGING ADD SUPPLEMENTAL LOG GROUP TEST_LOG_GROUP (ID, NAME) ALWAYS;
Account privileges required when using Oracle LogMiner to access the redo logs
To access the redo logs using the Oracle LogMiner, grant the following privileges to the Oracle user specified in the Oracle endpoint connection settings.
GRANT EXECUTE on DBMS_LOGMNR to db_user;
GRANT SELECT on V_$LOGMNR_LOGS to db_user;
GRANT SELECT on V_$LOGMNR_CONTENTS to db_user;
GRANT LOGMINING to db_user; -- Required only if the Oracle version is 12c or higher.
Account privileges required when using AWS DMS Binary Reader to access the redo logs
To access the redo logs using the AWS DMS Binary Reader, grant the following privileges to the Oracle user specified in the Oracle endpoint connection settings.
GRANT SELECT on v_$transportable_platform to db_user; -- Grant this privilege if the redo logs are stored in Oracle Automatic Storage Management (ASM) and AWS DMS accesses them from ASM.
GRANT CREATE ANY DIRECTORY to db_user; -- Grant this privilege to allow AWS DMS to use Oracle BFILE read file access in certain cases. This access is required when the replication instance doesn't have file-level access to the redo logs and the redo logs are on non-ASM storage.
GRANT EXECUTE on DBMS_FILE_TRANSFER to db_user; -- Grant this privilege to copy the redo log files to a temporary folder using the CopyToTempFolder method.
GRANT EXECUTE on DBMS_FILE_GROUP to db_user;
Binary Reader works with Oracle file features that include Oracle directories. Each Oracle directory object includes the name of the folder containing the redo log files to process. These Oracle directories aren't represented at the file system level. Instead, they are logical directories that are created at the Oracle database level. You can view them in the Oracle ALL_DIRECTORIES view.

If you want AWS DMS to create these Oracle directories, grant the CREATE ANY DIRECTORY privilege specified preceding. AWS DMS creates the directory names with the DMS_ prefix. If you don't grant the CREATE ANY DIRECTORY privilege, create the corresponding directories manually. In some cases when you create the Oracle directories manually, the Oracle user specified in the Oracle source endpoint isn't the user that created these directories. In these cases, also grant the READ on DIRECTORY privilege.
If the Oracle source endpoint is in Active Dataguard Standby (ADG), see the blog post How to use Binary Reader with ADG.
Note
AWS DMS CDC doesn't support Active Dataguard Standby that is not configured to use automatic redo transport service.
In some cases, you might use Oracle Managed Files (OMF) for storing the logs. Or your source endpoint is in ADG and thus you can't grant the CREATE ANY DIRECTORY privilege. In these cases, manually create the directories with all the possible log locations before starting the AWS DMS replication task. If AWS DMS doesn't find a precreated directory that it expects, the task stops. Also, AWS DMS doesn't delete the entries it has created in the ALL_DIRECTORIES view, so manually delete them.
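To find and clean up leftover directory entries, a sketch like the following can help. The DMS_ prefix filter matches the naming convention mentioned above; the directory name in the DROP statement is an illustrative placeholder, not a real DMS-created name.

```sql
-- Find directory objects that AWS DMS created (DMS_ prefix).
SELECT directory_name, directory_path
FROM   all_directories
WHERE  directory_name LIKE 'DMS_%';

-- Drop a leftover directory object (example name; substitute your own).
DROP DIRECTORY dms_example_dir;
```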
Additional account privileges required when using Binary Reader with Oracle ASM
To access the redo logs in Automatic Storage Management (ASM) using Binary Reader, grant the following privileges to the Oracle user specified in the Oracle endpoint connection settings.
SELECT ON v_$transportable_platform
SYSASM -- To access the ASM account with Oracle 11g Release 2 (version 11.2.0.2) and higher, grant the Oracle endpoint user the SYSASM privilege. For older supported Oracle versions, it's typically sufficient to grant the Oracle endpoint user the SYSDBA privilege.
You can validate ASM account access by opening a command prompt and invoking one of the following statements, depending on your Oracle version as specified preceding.
If you need the SYSDBA privilege, use the following.

sqlplus asmuser/asmpassword@+asmserver as sysdba

If you need the SYSASM privilege, use the following.

sqlplus asmuser/asmpassword@+asmserver as sysasm
Using a self-managed Oracle Standby as a source with Binary Reader for CDC in AWS DMS
To configure an Oracle Standby instance as a source when using Binary Reader for CDC, start with the following prerequisites:
- AWS DMS currently supports only Oracle Active Data Guard Standby.

- Make sure that the Oracle Data Guard configuration uses:

  - Redo transport services for automated transfers of redo data.

  - Apply services to automatically apply redo to the standby database.

- To confirm those requirements are met, execute the following query.

SQL> select open_mode, database_role from v$database;

From the output of that query, confirm that the standby database is opened in READ ONLY mode and redo is being applied automatically. For example:

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY WITH APPLY PHYSICAL STANDBY
To configure an Oracle Standby instance as a source when using Binary Reader for CDC
- Grant additional privileges required to access standby log files.

GRANT SELECT ON v_$standby_log TO db_user;

- Create a source endpoint for the Oracle Standby by using the AWS Management Console or AWS CLI. When creating the endpoint, specify the following extra connection attributes.

useLogminerReader=N;useBfile=Y;
Note
In AWS DMS, you can use extra connection attributes to specify if you want to migrate from the archive logs instead of the redo logs. For more information, see Endpoint settings when using Oracle as a source for AWS DMS.
- Configure the archived log destination.

DMS Binary Reader for an Oracle source without ASM uses Oracle directories to access archived redo logs. If your database is configured to use the Fast Recovery Area (FRA) as an archive log destination, the location of archived redo files isn't constant. Each day that archived redo logs are generated results in a new directory being created in the FRA, using the directory name format YYYY_MM_DD. For example:

DB_RECOVERY_FILE_DEST/SID/archivelog/YYYY_MM_DD

When DMS needs access to archived redo files in the newly created FRA directory and the primary read-write database is being used as a source, DMS creates a new Oracle directory or replaces an existing one, as follows.

CREATE OR REPLACE DIRECTORY dmsrep_taskid AS 'DB_RECOVERY_FILE_DEST/SID/archivelog/YYYY_MM_DD';

When the standby database is being used as a source, DMS is unable to create or replace the Oracle directory because the database is in read-only mode. But, you can choose to perform one of these additional steps:

- Modify log_archive_dest_id_1 to use an actual path instead of the FRA, in such a configuration that Oracle won't create daily subdirectories:

ALTER SYSTEM SET log_archive_dest_1='LOCATION=full directory path';

Then, create an Oracle directory object to be used by DMS:

CREATE OR REPLACE DIRECTORY dms_archived_logs AS 'full directory path';

- Create an additional archive log destination and an Oracle directory object pointing to that destination. For example:

ALTER SYSTEM SET log_archive_dest_3='LOCATION=full directory path';
CREATE DIRECTORY dms_archived_log AS 'full directory path';

Then add an extra connection attribute to the task source endpoint:

archivedLogDestId=3

- Manually pre-create Oracle directory objects to be used by DMS.

CREATE DIRECTORY dms_archived_log_20210301 AS 'DB_RECOVERY_FILE_DEST/SID/archivelog/2021_03_01';
CREATE DIRECTORY dms_archived_log_20210302 AS 'DB_RECOVERY_FILE_DEST/SID/archivelog/2021_03_02';
...

- Create an Oracle scheduler job that runs daily and creates the required directory.
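The daily scheduler job mentioned above can be sketched as follows. This is an illustrative example only: the job name, schedule, and file system path are assumptions, and the job is assumed to run on the primary database so that the directory definitions replicate to the standby through the data dictionary. Adjust the path under DB_RECOVERY_FILE_DEST and the SID to match your environment.

```sql
-- Illustrative sketch: a daily DBMS_SCHEDULER job that pre-creates the
-- directory object for the current day's FRA archivelog subdirectory.
-- The path '/u01/app/oracle/fast_recovery_area/ORCL' is a placeholder.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DMS_CREATE_ARCHIVELOG_DIR',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      DECLARE
        v_day VARCHAR2(10) := TO_CHAR(SYSDATE, 'YYYY_MM_DD');
      BEGIN
        EXECUTE IMMEDIATE
          'CREATE OR REPLACE DIRECTORY dms_archived_log_' ||
          REPLACE(v_day, '_', '') ||
          ' AS ''/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' ||
          v_day || '''';
      END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=5',
    enabled         => TRUE);
END;
/
```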
Using a user-managed database on Oracle Cloud Infrastructure (OCI) as a source for CDC in AWS DMS
A user-managed database is a database that you configure and control, such as an Oracle database created on a virtual machine (VM), bare metal, or Exadata server. Or, databases that you configure and control that run on dedicated infrastructure, like Oracle Cloud Infrastructure (OCI). The following information describes the privileges and configurations you need when using an Oracle user-managed database on OCI as a source for change data capture (CDC) in AWS DMS.
To configure an OCI hosted user-managed Oracle database as a source for change data capture
- Grant required user account privileges for a user-managed Oracle source database on OCI. For more information, see Account privileges for a self-managed Oracle source endpoint.

- Grant account privileges required when using Binary Reader to access the redo logs. For more information, see Account privileges required when using Binary Reader.

- Add account privileges that are required when using Binary Reader with Oracle Automatic Storage Management (ASM). For more information, see Additional account privileges required when using Binary Reader with Oracle ASM.

- Set up supplemental logging. For more information, see Setting up supplemental logging.

- Set up TDE encryption. For more information, see Encryption methods when using an Oracle database as a source endpoint.
The following limitations apply when replicating data from an Oracle source database on Oracle Cloud Infrastructure (OCI).
Limitations
- DMS doesn't support using Oracle LogMiner to access the redo logs.

- DMS doesn't support Autonomous DB.
Working with an AWS-managed Oracle database as a source for AWS DMS
An AWS-managed database is a database that is on an Amazon service such as Amazon RDS, Amazon Aurora, or Amazon S3. Following, you can find the privileges and configurations that you need to set up when using an AWS-managed Oracle database with AWS DMS.
User account privileges required on an AWS-managed Oracle source for AWS DMS
Grant the following privileges to the Oracle user account specified in the Oracle source endpoint definition.
Important
For all parameter values such as db_user and any-replicated-table, Oracle assumes the value is all uppercase unless you specify the value with a case-sensitive identifier. For example, suppose that you create a db_user value without using quotation marks, as in CREATE USER myuser or CREATE USER MYUSER. In this case, Oracle identifies and stores the value as all uppercase (MYUSER). If you use quotation marks, as in CREATE USER "MyUser" or CREATE USER 'MyUser', Oracle identifies and stores the case-sensitive value that you specify (MyUser).
GRANT CREATE SESSION to db_user;
GRANT SELECT ANY TRANSACTION to db_user;
GRANT SELECT on DBA_TABLESPACES to db_user;
GRANT SELECT ON any-replicated-table to db_user;
GRANT EXECUTE on rdsadmin.rdsadmin_util to db_user;
-- Required only if the Oracle version is 12c or higher:
GRANT LOGMINING to db_user;
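If you provision several environments, you can render this grant list from a template. The following Python sketch is illustrative only (the helper name, user, and table names are assumptions, not part of AWS DMS):

```python
# Illustrative helper: render the AWS DMS grant statements for a given
# account name. The statement list mirrors the block above; the helper
# itself is not part of AWS DMS or Oracle.
GRANTS = [
    "GRANT CREATE SESSION to {user}",
    "GRANT SELECT ANY TRANSACTION to {user}",
    "GRANT SELECT on DBA_TABLESPACES to {user}",
    "GRANT SELECT ON {table} to {user}",
    "GRANT EXECUTE on rdsadmin.rdsadmin_util to {user}",
    # Required only if the Oracle version is 12c or higher:
    "GRANT LOGMINING to {user}",
]

def make_grants(user, replicated_table):
    """Fill in the placeholders and terminate each statement."""
    return [g.format(user=user, table=replicated_table) + ";" for g in GRANTS]

statements = make_grants("dms_user", "hr.employees")
print("\n".join(statements))
```

Run the rendered statements once per replicated table as needed; the LOGMINING grant still applies only to Oracle 12c and higher.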
In addition, grant SELECT and EXECUTE permissions on SYS objects using the Amazon RDS procedure rdsadmin.rdsadmin_util.grant_sys_object as shown. For more information, see Granting SELECT or EXECUTE privileges to SYS objects.
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_VIEWS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_TAB_PARTITIONS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_INDEXES', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_OBJECTS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_TABLES', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_USERS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_CATALOG', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_CONSTRAINTS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_CONS_COLUMNS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_TAB_COLS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_IND_COLUMNS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_LOG_GROUPS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$THREAD', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$PARAMETER', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$NLS_PARAMETERS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TIMEZONE_NAMES', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TRANSACTION', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$CONTAINERS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_REGISTRY', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('OBJ$', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_ENCRYPTED_COLUMNS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'db_user', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'db_user', 'EXECUTE');
-- (as of Oracle versions 12.1 and higher)
exec rdsadmin.rdsadmin_util.grant_sys_object('REGISTRY$SQLPATCH', 'db_user', 'SELECT');
-- (for Amazon RDS Active Dataguard Standby (ADG))
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$STANDBY_LOG', 'db_user', 'SELECT');
-- (for transparent data encryption (TDE))
exec rdsadmin.rdsadmin_util.grant_sys_object('ENC$', 'db_user', 'SELECT');
-- (for validation with LOB columns)
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_CRYPTO', 'db_user', 'EXECUTE');
-- (for Binary Reader)
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_DIRECTORIES', 'db_user', 'SELECT');
-- Required when the source database is Oracle Data Guard, and Oracle Standby is used in the latest release of DMS version 3.4.6, version 3.4.7, and higher.
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATAGUARD_STATS', 'db_user', 'SELECT');
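Because the grant_sys_object calls are repetitive, some teams generate them with a script. A minimal sketch, assuming the SELECT-privilege object list above (the conditional objects such as REGISTRY$SQLPATCH are omitted for brevity; the generator itself is not an AWS tool):

```python
# Illustrative: emit the rdsadmin_util.grant_sys_object calls for the
# SELECT-privilege objects listed above, plus the DBMS_LOGMNR EXECUTE grant.
# Object names come from the documentation; the generator is an assumption.
SELECT_OBJECTS = [
    "ALL_VIEWS", "ALL_TAB_PARTITIONS", "ALL_INDEXES", "ALL_OBJECTS",
    "ALL_TABLES", "ALL_USERS", "ALL_CATALOG", "ALL_CONSTRAINTS",
    "ALL_CONS_COLUMNS", "ALL_TAB_COLS", "ALL_IND_COLUMNS", "ALL_LOG_GROUPS",
    "V_$ARCHIVED_LOG", "V_$LOG", "V_$LOGFILE", "V_$DATABASE", "V_$THREAD",
    "V_$PARAMETER", "V_$NLS_PARAMETERS", "V_$TIMEZONE_NAMES",
    "V_$TRANSACTION", "V_$CONTAINERS", "DBA_REGISTRY", "OBJ$",
    "ALL_ENCRYPTED_COLUMNS", "V_$LOGMNR_LOGS", "V_$LOGMNR_CONTENTS",
]

def grant_sys_objects(user, objects, privilege):
    """Render one grant_sys_object call per object."""
    return [
        f"exec rdsadmin.rdsadmin_util.grant_sys_object('{obj}', '{user}', '{privilege}');"
        for obj in objects
    ]

calls = grant_sys_objects("dms_user", SELECT_OBJECTS, "SELECT")
calls += grant_sys_objects("dms_user", ["DBMS_LOGMNR"], "EXECUTE")
print("\n".join(calls))
```

Append the conditional grants (ADG, TDE, LOB validation, Binary Reader, Data Guard) by hand where they apply to your configuration.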
For more information on using Amazon RDS Active Dataguard Standby (ADG) with AWS DMS, see Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS.
For more information on using Oracle TDE with AWS DMS, see Supported encryption methods for using Oracle as a source for AWS DMS.
Prerequisites for handling open transactions for Oracle Standby
When using AWS DMS versions 3.4.6 and higher, perform the following steps to handle open transactions for Oracle Standby.
-
Create a database link named AWSDMS_DBLINK on the primary database. DMS_USER will use the database link to connect to the primary database. Note that the database link is executed from the standby instance to query the open transactions running on the primary database. See the following example.
CREATE PUBLIC DATABASE LINK AWSDMS_DBLINK
  CONNECT TO DMS_USER IDENTIFIED BY DMS_USER_PASSWORD
  USING '(DESCRIPTION=
            (ADDRESS=(PROTOCOL=TCP)(HOST=PRIMARY_HOST_NAME_OR_IP)(PORT=PORT))
            (CONNECT_DATA=(SERVICE_NAME=SID))
          )';
-
Verify that the connection to the database link using DMS_USER is established, as shown in the following example.
select 1 from dual@AWSDMS_DBLINK
Configuring an AWS-managed Oracle source for AWS DMS
Before using an AWS-managed Oracle database as a source for AWS DMS, perform the following tasks for the Oracle database:
-
Enable automatic backups. For more information about enabling automatic backups, see Enabling automated backups in the Amazon RDS User Guide.
-
Set up supplemental logging.
-
Set up archiving. Archiving the redo logs for your Amazon RDS for Oracle DB instance allows AWS DMS to retrieve the log information using Oracle LogMiner or Binary Reader.
To set up archiving
-
Run the
rdsadmin.rdsadmin_util.set_configuration
command to set up archiving.For example, to retain the archived redo logs for 24 hours, run the following command.
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24); commit;
Note
The commit is required for a change to take effect.
-
Make sure that your storage has enough space for the archived redo logs during the specified retention period. For example, if your retention period is 24 hours, calculate the total size of your accumulated archived redo logs over a typical hour of transaction processing and multiply that total by 24. Compare this calculated 24-hour total with your available storage space and decide if you have enough storage to handle a full 24 hours of transaction processing.
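The sizing check above is simple arithmetic. For example, assuming roughly 2 GiB of archived redo per typical hour and a 24-hour retention period (all figures here are illustrative, not recommendations):

```python
# Illustrative storage estimate for archived redo log retention. The hourly
# redo volume and free-space figures are made-up example inputs.
hourly_redo_gib = 2.0    # archived redo generated in a typical hour
retention_hours = 24     # matches the retention set with set_configuration
free_space_gib = 60.0    # storage available for archived redo logs

required_gib = hourly_redo_gib * retention_hours
enough = free_space_gib >= required_gib
print(f"Need about {required_gib:.0f} GiB over {retention_hours} hours; enough: {enough}")
```

In practice, leave headroom above the calculated total, since redo generation spikes during batch loads.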
To set up supplemental logging
-
Run the following command to enable supplemental logging at the database level.
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
-
Run the following command to enable primary key supplemental logging.
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','PRIMARY KEY');
-
(Optional) Enable key-level supplemental logging at the table level.
Your source database incurs a small bit of overhead when key-level supplemental logging is enabled. Therefore, if you are migrating only a subset of your tables, you might want to enable key-level supplemental logging at the table level. To enable key-level supplemental logging at the table level, run the following command.
alter table table_name add supplemental log data (PRIMARY KEY) columns;
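If only a subset of tables is in scope, the per-table statements can be rendered with a short script (the table names and the script itself are illustrative, not part of AWS DMS):

```python
# Illustrative: render the table-level supplemental logging statement for
# each table in the migration scope. Table names are example values.
tables = ["hr.employees", "hr.departments"]

statements = [
    f"alter table {t} add supplemental log data (PRIMARY KEY) columns;"
    for t in tables
]
print("\n".join(statements))
```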
Configuring a CDC task to use Binary Reader with an RDS for Oracle source for AWS DMS
You can configure AWS DMS to access the source Amazon RDS for Oracle instance redo logs using Binary Reader for CDC.
Note
To use Oracle LogMiner, the minimum required user account privileges are sufficient. For more information, see User account privileges required on an AWS-managed Oracle source for AWS DMS.
To use AWS DMS Binary Reader, specify additional settings and extra connection attributes for the Oracle source endpoint, depending on your AWS DMS version.
Binary Reader support is available in the following versions of Amazon RDS for Oracle:
-
Oracle 11.2 – Versions 11.2.0.4.v11 and higher
-
Oracle 12.1 – Versions 12.1.0.2.v7 and higher
-
Oracle 12.2 – All versions
-
Oracle 18.0 – All versions
-
Oracle 19.0 – All versions
To configure CDC using Binary Reader
-
Log in to your Amazon RDS for Oracle source database as the master user and run the following stored procedures to create the server-level directories.
exec rdsadmin.rdsadmin_master_util.create_archivelog_dir; exec rdsadmin.rdsadmin_master_util.create_onlinelog_dir;
-
Grant the following privileges to the Oracle user account that is used to access the Oracle source endpoint.
GRANT READ ON DIRECTORY ONLINELOG_DIR TO db_user;
GRANT READ ON DIRECTORY ARCHIVELOG_DIR TO db_user;
-
Set the following extra connection attributes on the Amazon RDS Oracle source endpoint:
-
For RDS Oracle versions 11.2 and 12.1, set the following.
useLogminerReader=N;useBfile=Y;accessAlternateDirectly=false;useAlternateFolderForOnline=true;oraclePathPrefix=/rdsdbdata/db/{$DATABASE_NAME}_A/;usePathPrefix=/rdsdbdata/log/;replacePathPrefix=true;
-
For RDS Oracle versions 12.2, 18.0, and 19.0, set the following.
useLogminerReader=N;useBfile=Y;
-
Note
Make sure there's no white space following the semicolon separator (;) for multiple attribute settings, for example oneSetting;thenAnother.
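One way to avoid stray whitespace is to assemble the attribute string programmatically. This sketch joins the version 12.2 and higher settings shown above and checks the result (the helper is illustrative, not an AWS tool):

```python
# Illustrative: assemble an extra connection attribute string and check that
# no whitespace follows the semicolon separators.
attributes = {
    "useLogminerReader": "N",
    "useBfile": "Y",
}

eca = ";".join(f"{key}={value}" for key, value in attributes.items()) + ";"
assert " " not in eca  # stray whitespace would break attribute parsing
print(eca)
```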
For more information about configuring a CDC task, see Configuration for CDC on an Oracle source database.
Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS
Verify the following prerequisites for using Amazon RDS for Oracle Standby as a source when using Binary Reader for CDC in AWS DMS:
-
Use the Oracle master user to set up Binary Reader.
-
Note that AWS DMS currently supports only Oracle Active Data Guard Standby.
After you do so, use the following procedure to use RDS for Oracle Standby as a source when using Binary Reader for CDC.
To configure an RDS for Oracle Standby as a source when using Binary Reader for CDC
-
Sign in to the RDS for Oracle primary instance as the master user.
-
Run the following stored procedures as documented in the Amazon RDS User Guide to create the server-level directories.
exec rdsadmin.rdsadmin_master_util.create_archivelog_dir; exec rdsadmin.rdsadmin_master_util.create_onlinelog_dir;
-
Identify the directories created in step 2.
SELECT directory_name, directory_path FROM all_directories WHERE directory_name LIKE ( 'ARCHIVELOG_DIR_%' ) OR directory_name LIKE ( 'ONLINELOG_DIR_%' )
For example, the preceding code displays a list of directories like the following.
-
Grant the READ privilege on the preceding directories to the Oracle user account that is used to access the Oracle Standby.
GRANT READ ON DIRECTORY ARCHIVELOG_DIR_A TO db_user;
GRANT READ ON DIRECTORY ARCHIVELOG_DIR_B TO db_user;
GRANT READ ON DIRECTORY ONLINELOG_DIR_A TO db_user;
GRANT READ ON DIRECTORY ONLINELOG_DIR_B TO db_user;
-
Perform an archive log switch on the primary instance. Doing this makes sure that the changes to ALL_DIRECTORIES are also ported to the Oracle Standby.
-
Run an ALL_DIRECTORIES query on the Oracle Standby to confirm that the changes were applied.
-
Create a source endpoint for the Oracle Standby by using the AWS DMS Management Console or AWS Command Line Interface (AWS CLI). While creating the endpoint, specify the following extra connection attributes.
useLogminerReader=N;useBfile=Y;archivedLogDestId=1;additionalArchivedLogDestId=2
-
After creating the endpoint, use Test endpoint connection on the Create endpoint page of the console or the AWS CLI test-connection command to verify that connectivity is established.
Limitations on using Oracle as a source for AWS DMS
The following limitations apply when using an Oracle database as a source for AWS DMS:
-
AWS DMS supports Oracle Extended data types in AWS DMS version 3.5.0 and higher.
-
AWS DMS doesn't support long object names (over 30 bytes).
-
AWS DMS doesn't support function-based indexes.
-
If you manage supplemental logging and carry out transformations on any of the columns, make sure that supplemental logging is activated for all fields and columns. For more information on setting up supplemental logging, see the following topics:
-
For a self-managed Oracle source database, see Setting up supplemental logging.
-
For an AWS-managed Oracle source database, see Configuring an AWS-managed Oracle source for AWS DMS.
-
-
AWS DMS doesn't support the multi-tenant container root database (CDB$ROOT). It does support a PDB using the Binary Reader.
-
AWS DMS doesn't support deferred constraints.
-
In AWS DMS version 3.5.1 and higher, secure LOBs are supported only by performing a LOB lookup.
-
AWS DMS supports the rename table table-name to new-table-name syntax for all supported Oracle versions 11 and higher. This syntax isn't supported for any Oracle version 10 source databases.
-
AWS DMS doesn't replicate results of the DDL statement ALTER TABLE ADD column data_type DEFAULT default_value. Instead of replicating default_value to the target, it sets the new column to NULL.
-
When using AWS DMS version 3.4.7 or higher, to replicate changes that result from partition or subpartition operations, do the following before starting a DMS task:
-
Manually create the partitioned table structure (DDL).
-
Make sure the DDL is the same on both the Oracle source and the Oracle target.
-
Set the extra connection attribute enableHomogenousPartitionOps=true.
For more information about enableHomogenousPartitionOps, see Endpoint settings when using Oracle as a source for AWS DMS. Also, note that on FULL+CDC tasks, DMS doesn't replicate data changes captured as part of the cached changes. In that use case, recreate the table structure on the Oracle target and reload the tables in question.
Prior to AWS DMS version 3.4.7, DMS doesn't replicate data changes that result from partition or subpartition operations (ADD, DROP, EXCHANGE, and TRUNCATE). Such updates might cause the following errors during replication:
-
For ADD operations, updates and deletes on the added data might raise a "0 rows affected" warning.
-
For DROP and TRUNCATE operations, new inserts might raise "duplicates" errors.
-
EXCHANGE operations might raise both a "0 rows affected" warning and "duplicates" errors.
To replicate changes that result from partition or subpartition operations, reload the tables in question. After adding a new empty partition, operations on the newly added partition are replicated to the target as normal.
-
-
AWS DMS versions prior to 3.4 don't support data changes on the target that result from running the CREATE TABLE AS statement on the source. However, the new table is created on the target.
-
AWS DMS doesn't capture changes made by the Oracle DBMS_REDEFINITION package, for example the table metadata and the OBJECT_ID field.
-
AWS DMS maps empty BLOB and CLOB columns to NULL on the target.
-
When capturing changes with Oracle 11 LogMiner, an update on a CLOB column with a string length greater than 1982 is lost, and the target is not updated.
-
During change data capture (CDC), AWS DMS doesn't support batch updates to numeric columns defined as a primary key.
-
AWS DMS doesn't support certain UPDATE commands. The following example is an unsupported UPDATE command.
UPDATE TEST_TABLE SET KEY=KEY+1;
Here, TEST_TABLE is the table name and KEY is a numeric column defined as a primary key.
-
AWS DMS doesn't support full LOB mode for loading LONG and LONG RAW columns. Instead, you can use limited LOB mode for migrating these data types to an Oracle target. In limited LOB mode, AWS DMS truncates data in LONG or LONG RAW columns longer than 64 KB to 64 KB.
-
AWS DMS doesn't support full LOB mode for loading XMLTYPE columns. Instead, you can use limited LOB mode for migrating XMLTYPE columns to an Oracle target. In limited LOB mode, DMS truncates any data larger than the user-defined 'Maximum LOB size' setting. The maximum recommended value for 'Maximum LOB size' is 100 MB.
-
AWS DMS doesn't replicate tables whose names contain apostrophes.
-
AWS DMS supports CDC from materialized views. But DMS doesn't support CDC from any other views.
-
AWS DMS doesn't support CDC for index-organized tables with an overflow segment.
-
AWS DMS doesn't support the Drop Partition operation for tables partitioned by reference with enableHomogenousPartitionOps set to true.
-
When you use Oracle LogMiner to access the redo logs, AWS DMS has the following limitations:
-
For Oracle 12 only, AWS DMS doesn't replicate any changes to LOB columns.
-
For all Oracle versions, AWS DMS doesn't replicate the result of UPDATE operations on XMLTYPE and LOB columns.
-
AWS DMS doesn't support XA transactions in replication while using Oracle LogMiner.
-
Oracle LogMiner doesn't support connections to a pluggable database (PDB). To connect to a PDB, access the redo logs using Binary Reader.
-
SHRINK SPACE operations aren’t supported.
-
-
When you use Binary Reader, AWS DMS has these limitations:
-
It doesn't support table clusters.
-
It supports only table-level SHRINK SPACE operations. This level includes the full table, partitions, and sub-partitions.
-
It doesn't support changes to index-organized tables with key compression.
-
It doesn't support implementing online redo logs on raw devices.
-
Binary Reader supports TDE only for self-managed Oracle databases since RDS for Oracle doesn't support wallet password retrieval for TDE encryption keys.
-
-
AWS DMS doesn't support connections to an Amazon RDS Oracle source using an Oracle Automatic Storage Management (ASM) proxy.
-
AWS DMS doesn't support virtual columns.
-
AWS DMS doesn't support the ROWID data type or materialized views based on a ROWID column.
AWS DMS has partial support for Oracle materialized views. For full loads, DMS can do a full-load copy of an Oracle materialized view. DMS copies the materialized view as a base table to the target system and ignores any ROWID columns in the materialized view. For ongoing replication (CDC), DMS tries to replicate changes to the materialized view data, but the results might not be ideal. Specifically, if the materialized view is completely refreshed, DMS replicates individual deletes for all the rows, followed by individual inserts for all the rows. That is a resource-intensive exercise and might perform poorly for materialized views with large numbers of rows. For ongoing replication where the materialized views do a fast refresh, DMS tries to process and replicate the fast-refresh data changes. In either case, DMS skips any ROWID columns in the materialized view.
-
AWS DMS doesn't load or capture global temporary tables.
-
For S3 targets using replication, enable supplemental logging on every column so that source row updates capture every column value. An example follows.
alter table yourtablename add supplemental log data (all) columns;
-
An update for a row with a composite unique key that contains null can't be replicated at the target.
-
AWS DMS doesn't support the use of multiple Oracle TDE encryption keys on the same source endpoint. Each endpoint can have only one attribute for the TDE encryption key name, securityDbEncryptionName, and one TDE password for this key.
-
When replicating from Amazon RDS for Oracle, TDE is supported only with encrypted tablespace and using Oracle LogMiner.
-
AWS DMS does not support multiple table rename operations in quick succession.
-
When using Oracle 19.0 as source, AWS DMS doesn't support the following features:
-
Data-guard DML redirect
-
Partitioned hybrid tables
-
Schema-only Oracle accounts
-
-
AWS DMS doesn't support migration of tables or views of type BIN$ or DR$.
-
Beginning with Oracle 18.x, AWS DMS doesn't support change data capture (CDC) from Oracle Express Edition (Oracle Database XE).
-
When migrating data from a CHAR column, DMS truncates any trailing spaces.
-
AWS DMS doesn't support replication from application containers.
-
AWS DMS doesn't support performing Oracle Flashback Database and restore points, as these operations affect the consistency of Oracle Redo Log files.
-
The direct-load INSERT procedure with the parallel execution option isn't supported in the following cases:
-
Uncompressed tables with more than 255 columns
-
Rows larger than 8 KB
-
Exadata HCC tables
-
Databases running on a big-endian platform
-
-
A source table with neither a primary key nor a unique key requires supplemental logging to be enabled on all columns. This creates more redo log activity and might increase DMS CDC latency.
-
AWS DMS doesn't migrate data from invisible columns in your source database. To include these columns in your migration scope, use the ALTER TABLE statement to make these columns visible.
SSL support for an Oracle endpoint
AWS DMS Oracle endpoints support SSL V3 for the none and verify-ca SSL modes. To use SSL with an Oracle endpoint, upload the Oracle wallet for the endpoint instead of .pem certificate files.
Using an existing certificate for Oracle SSL
To use an existing Oracle client installation to create the Oracle wallet file from the CA certificate file, perform the following steps.
To use an existing Oracle client installation for Oracle SSL with AWS DMS
-
Set the ORACLE_HOME system variable to the location of your dbhome_1 directory by running the following command.
prompt> export ORACLE_HOME=/home/user/app/user/product/12.1.0/dbhome_1
-
Append $ORACLE_HOME/lib to the LD_LIBRARY_PATH system variable.
prompt> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
-
Create a directory for the Oracle wallet at $ORACLE_HOME/ssl_wallet.
prompt> mkdir $ORACLE_HOME/ssl_wallet
-
Put the CA certificate .pem file in the ssl_wallet directory. If you use Amazon RDS, you can download the rds-ca-2015-root.pem root CA certificate file hosted by Amazon RDS. For more information about downloading this file, see Using SSL/TLS to encrypt a connection to a DB instance in the Amazon RDS User Guide.
-
Run the following commands to create the Oracle wallet.
prompt> orapki wallet create -wallet $ORACLE_HOME/ssl_wallet -auto_login_only
prompt> orapki wallet add -wallet $ORACLE_HOME/ssl_wallet -trusted_cert -cert $ORACLE_HOME/ssl_wallet/ca-cert.pem -auto_login_only
When you have completed the preceding steps, you can import the wallet file with the ImportCertificate API call by specifying the certificate-wallet parameter. You can then use the imported wallet certificate when you select verify-ca as the SSL mode when creating or modifying your Oracle endpoint.
Note
Oracle wallets are binary files. AWS DMS accepts these files as-is.
Using a self-signed certificate for Oracle SSL
To use a self-signed certificate for Oracle SSL, follow these steps, assuming an Oracle wallet password of oracle123.
To use a self-signed certificate for Oracle SSL with AWS DMS
-
Create a directory you will use to work with the self-signed certificate.
mkdir -p /u01/app/oracle/self_signed_cert
-
Change into the directory you created in the previous step.
cd /u01/app/oracle/self_signed_cert
-
Create a root key.
openssl genrsa -out self-rootCA.key 2048
-
Self-sign a root certificate using the root key you created in the previous step.
openssl req -x509 -new -nodes -key self-rootCA.key -sha256 -days 3650 -out self-rootCA.pem
Use input parameters like the following.
-
Country Name (2 letter code) [XX]
, for example:AU
-
State or Province Name (full name) []
, for example:NSW
-
Locality Name (e.g., city) [Default City]
, for example:Sydney
-
Organization Name (e.g., company) [Default Company Ltd]
, for example:AmazonWebService
-
Organizational Unit Name (e.g., section) []
, for example:DBeng
-
Common Name (e.g., your name or your server's hostname) []
, for example:aws
-
Email Address []
, for example: abcd.efgh@amazonwebservice.com
-
-
Create an Oracle wallet directory for the Oracle database.
mkdir -p /u01/app/oracle/wallet
-
Create a new Oracle wallet.
orapki wallet create -wallet "/u01/app/oracle/wallet" -pwd oracle123 -auto_login_local
-
Add the root certificate to the Oracle wallet.
orapki wallet add -wallet "/u01/app/oracle/wallet" -pwd oracle123 -trusted_cert -cert /u01/app/oracle/self_signed_cert/self-rootCA.pem
-
List the contents of the Oracle wallet. The list should include the root certificate.
orapki wallet display -wallet /u01/app/oracle/wallet -pwd oracle123
For example, this might display output similar to the following.
Requested Certificates: User Certificates: Trusted Certificates: Subject: CN=aws,OU=DBeng,O= AmazonWebService,L=Sydney,ST=NSW,C=AU
-
Generate the Certificate Signing Request (CSR) using the ORAPKI utility.
orapki wallet add -wallet "/u01/app/oracle/wallet" -pwd oracle123 -dn "CN=aws" -keysize 2048 -sign_alg sha256
-
Run the following command.
openssl pkcs12 -in /u01/app/oracle/wallet/ewallet.p12 -nodes -out /u01/app/oracle/wallet/nonoracle_wallet.pem
This has output like the following.
Enter Import Password: MAC verified OK Warning unsupported bag type: secretBag
-
Put 'dms' as the common name.
openssl req -new -key /u01/app/oracle/wallet/nonoracle_wallet.pem -out certdms.csr
Use input parameters like the following.
-
Country Name (2 letter code) [XX]
, for example:AU
-
State or Province Name (full name) []
, for example:NSW
-
Locality Name (e.g., city) [Default City]
, for example:Sydney
-
Organization Name (e.g., company) [Default Company Ltd]
, for example:AmazonWebService
-
Organizational Unit Name (e.g., section) []
, for example:aws
-
Common Name (e.g., your name or your server's hostname) []
, for example:aws
-
Email Address []
, for example: abcd.efgh@amazonwebservice.com
Make sure this is not the same as step 4. You can do this, for example, by changing Organizational Unit Name to a different name as shown.
Enter the following additional attributes to be sent with your certificate request.
-
A challenge password []
, for example:oracle123
-
An optional company name []
, for example:aws
-
-
Get the certificate signature.
openssl req -noout -text -in certdms.csr | grep -i signature
The signature algorithm shown here is sha256WithRSAEncryption.
-
Run the following command to generate the certificate (.crt) file.
openssl x509 -req -in certdms.csr -CA self-rootCA.pem -CAkey self-rootCA.key -CAcreateserial -out certdms.crt -days 365 -sha256
This displays output like the following.
Signature ok subject=/C=AU/ST=NSW/L=Sydney/O=awsweb/OU=DBeng/CN=aws Getting CA Private Key
-
Add the certificate to the wallet.
orapki wallet add -wallet /u01/app/oracle/wallet -pwd oracle123 -user_cert -cert certdms.crt
-
View the wallet. It should have two entries. See the code following.
orapki wallet display -wallet /u01/app/oracle/wallet -pwd oracle123
-
Configure the sqlnet.ora file ($ORACLE_HOME/network/admin/sqlnet.ora).
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/wallet/)
    )
  )
SQLNET.AUTHENTICATION_SERVICES = (NONE)
SSL_VERSION = 1.0
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA)
-
Stop the Oracle listener.
lsnrctl stop
-
Add entries for SSL in the listener.ora file ($ORACLE_HOME/network/admin/listener.ora).
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/wallet/)
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SID)
      (ORACLE_HOME = ORACLE_HOME)
      (SID_NAME = SID)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCPS)(HOST = localhost.localdomain)(PORT = 1522))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
-
Configure the tnsnames.ora file ($ORACLE_HOME/network/admin/tnsnames.ora).
<SID> =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <SID>)
    )
  )
<SID>_ssl =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = localhost.localdomain)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <SID>)
    )
  )
-
Restart the Oracle listener.
lsnrctl start
-
Show the Oracle listener status.
lsnrctl status
-
Test the SSL connection to the database from localhost using sqlplus and the SSL tnsnames entry.
sqlplus -L ORACLE_USER@SID_ssl
-
Verify that you successfully connected using SSL.
SELECT SYS_CONTEXT('USERENV', 'network_protocol') FROM DUAL;

SYS_CONTEXT('USERENV','NETWORK_PROTOCOL')
--------------------------------------------------------------------------------
tcps
-
Change directory to the directory with the self-signed certificate.
cd /u01/app/oracle/self_signed_cert
-
Create a new client Oracle wallet for AWS DMS to use.
orapki wallet create -wallet ./ -auto_login_only
-
Add the self-signed root certificate to the Oracle wallet.
orapki wallet add -wallet ./ -trusted_cert -cert self-rootCA.pem -auto_login_only
-
List the contents of the Oracle wallet for AWS DMS to use. The list should include the self-signed root certificate.
orapki wallet display -wallet ./
This has output like the following.
Trusted Certificates: Subject: CN=aws,OU=DBeng,O=AmazonWebService,L=Sydney,ST=NSW,C=AU
-
Upload the Oracle wallet that you just created to AWS DMS.
Supported encryption methods for using Oracle as a source for AWS DMS
In the following table, you can find the transparent data encryption (TDE) methods that AWS DMS supports when working with an Oracle source database.
Redo logs access method | TDE tablespace | TDE column |
---|---|---|
Oracle LogMiner | Yes | Yes |
Binary Reader | Yes | Yes |
AWS DMS supports Oracle TDE when using Binary Reader, on both the column level and the tablespace level. To use TDE encryption with AWS DMS, first identify the Oracle wallet location where the TDE encryption key and TDE password are stored. Then identify the correct TDE encryption key and password for your Oracle source endpoint.
To identify and specify encryption key and password for TDE encryption
-
Run the following query to find the Oracle encryption wallet on the Oracle database host.
SQL> SELECT WRL_PARAMETER FROM V$ENCRYPTION_WALLET; WRL_PARAMETER -------------------------------------------------------------------------------- /u01/oracle/product/12.2.0/dbhome_1/data/wallet/
Here,
/u01/oracle/product/12.2.0/dbhome_1/data/wallet/
is the wallet location. -
Get the master key ID using one of the following encryption options, depending on which one returns this value.
-
For table or column-level encryption, run the following queries.
SQL> SELECT OBJECT_ID FROM ALL_OBJECTS WHERE OWNER='DMS_USER' AND OBJECT_NAME='TEST_TDE_COLUMN' AND OBJECT_TYPE='TABLE';

OBJECT_ID
---------------
81046

SQL> SELECT MKEYID FROM SYS.ENC$ WHERE OBJ#=81046;

MKEYID
------------
AWGDC9glSk8Xv+3bVveiVSgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Here, AWGDC9glSk8Xv+3bVveiVSg is the master key ID (MKEYID). If you get a value for MKEYID, you can continue with Step 3. Otherwise, continue with Step 2.2.
Note
The trailing string of 'A' characters (AAA...) is not part of the value.
-
For tablespace-level encryption, run the following queries.
SQL> SELECT TABLESPACE_NAME, ENCRYPTED FROM dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
TEST_ENCRYT                    YES

SQL> SELECT name, utl_raw.cast_to_varchar2(
       utl_encode.base64_encode('01'||substr(mkeyid,1,4))) ||
     utl_raw.cast_to_varchar2(
       utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64
     FROM (SELECT t.name, RAWTOHEX(x.mkid) mkeyid
           FROM v$tablespace t, x$kcbtek x
           WHERE t.ts#=x.ts#)
     WHERE name = 'TEST_ENCRYT';

NAME                           MASTERKEYID_BASE64
------------------------------ ----------------------------------
TEST_ENCRYT                    AWGDC9glSk8Xv+3bVveiVSg=
Here, AWGDC9glSk8Xv+3bVveiVSg is the master key ID for the TEST_ENCRYT tablespace. If both steps 2.1 and 2.2 return a value, they are always identical.
The trailing '=' character is not part of the value.
-
From the command line, list the encryption wallet entries on the source Oracle database host.
$ mkstore -wrl /u01/oracle/product/12.2.0/dbhome_1/data/wallet/ -list

Oracle Secret Store entries:
ORACLE.SECURITY.DB.ENCRYPTION.AWGDC9glSk8Xv+3bVveiVSgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AY1mRA8OXU9Qvzo3idU4OH4AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
ORACLE.SECURITY.ID.ENCRYPTION.
ORACLE.SECURITY.KB.ENCRYPTION.
ORACLE.SECURITY.KM.ENCRYPTION.AY1mRA8OXU9Qvzo3idU4OH4AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Find the entry containing the master key ID that you found in step 2 (AWGDC9glSk8Xv+3bVveiVSg). This entry is the TDE encryption key name.
-
View the details of the entry that you found in the previous step.
$ mkstore -wrl /u01/oracle/product/12.2.0/dbhome_1/data/wallet/ -viewEntry ORACLE.SECURITY.DB.ENCRYPTION.AWGDC9glSk8Xv+3bVveiVSgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

ORACLE.SECURITY.DB.ENCRYPTION.AWGDC9glSk8Xv+3bVveiVSgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA = AEMAASAASGYs0phWHfNt9J5mEMkkegGFiD4LLfQszDojgDzbfoYDEACv0x3pJC+UGD/PdtE2jLIcBQcAeHgJChQGLA==
Enter the wallet password to see the result.
Here, the value to the right of '=' is the TDE password.
-
Specify the TDE encryption key name for the Oracle source endpoint by setting the securityDbEncryptionName extra connection attribute.

securityDbEncryptionName=ORACLE.SECURITY.DB.ENCRYPTION.AWGDC9glSk8Xv+3bVveiVSgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
-
Provide the associated TDE password for this key on the console as part of the Oracle source's Password value. Format the comma-separated password values in the following order, ending with the TDE password value.

Oracle_db_password,ASM_Password,AEMAASAASGYs0phWHfNt9J5mEMkkegGFiD4LLfQszDojgDzbfoYDEACv0x3pJC+UGD/PdtE2jLIcBQcAeHgJChQGLA==

Specify the password values in this order regardless of your Oracle database configuration. For example, if you're using TDE but your Oracle database isn't using ASM, specify the password values in the following comma-separated order.

Oracle_db_password,,AEMAASAASGYs0phWHfNt9J5mEMkkegGFiD4LLfQszDojgDzbfoYDEACv0x3pJC+UGD/PdtE2jLIcBQcAeHgJChQGLA==
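If you want to sanity-check the base64-encoded master key ID outside the database, the transformation from step 2.2 can be reproduced in a short script. The following Python sketch is illustrative only: the hex value and passwords are made-up placeholders, and the helper names are not part of AWS DMS.

```python
import base64
import binascii

def master_key_id_base64(mkid_hex: str) -> str:
    """Reproduce the step 2.2 query: base64-encode the byte 0x01 plus the
    first 2 bytes of the raw master key ID, then base64-encode the
    remaining bytes, and concatenate the two results."""
    head = base64.b64encode(binascii.unhexlify("01" + mkid_hex[:4])).decode()
    tail = base64.b64encode(binascii.unhexlify(mkid_hex[4:])).decode()
    return head + tail

def dms_password(oracle_db_password: str, asm_password: str, tde_password: str) -> str:
    """Build the comma-separated Password value for the Oracle source
    endpoint; pass an empty asm_password when the database doesn't use ASM."""
    return ",".join([oracle_db_password, asm_password, tde_password])

# Placeholder raw master key ID (hex), not a real key
print(master_key_id_base64("0123456789abcdef"))  # -> AQEjRWeJq83v
print(dms_password("oracle_pw", "", "tde_pw"))   # -> oracle_pw,,tde_pw
```

The trailing '=' padding that the script may produce matches the padding shown in the step 2.2 query output and, as noted there, is not part of the value.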
If the TDE credentials you specify are incorrect, the AWS DMS migration task doesn't fail. However, the task also doesn't read or apply ongoing replication changes to the target database. After starting the task, monitor Table statistics on the console migration task page to make sure changes are replicated.
If a DBA changes the TDE credential values for the Oracle database while the task is running, the task fails. The error message contains the new TDE encryption key name. To specify new values and restart the task, use the preceding procedure.
Important
You can't manipulate a TDE wallet created in an Oracle Automatic Storage Management (ASM) location, because OS-level commands such as cp, mv, orapki, and mkstore corrupt the wallet files stored there. This restriction applies only to TDE wallet files stored in an ASM location, not to TDE wallet files stored in a local OS directory.
To manipulate a TDE wallet stored in ASM with OS level commands, create a local keystore and merge the ASM keystore into the local keystore as follows:
-
Create a local keystore.
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE 'file_system_wallet_location' IDENTIFIED BY wallet_password;
-
Merge the ASM keystore into the local keystore.
ADMINISTER KEY MANAGEMENT MERGE KEYSTORE 'ASM_wallet_location' IDENTIFIED BY wallet_password INTO EXISTING KEYSTORE 'file_system_wallet_location' IDENTIFIED BY wallet_password WITH BACKUP;
Then, to list the encryption wallet entries and TDE password, run steps 3 and 4 against the local keystore.
Supported compression methods for using Oracle as a source for AWS DMS
In the following table, you can find which compression methods AWS DMS supports when working with an Oracle source database. As the table shows, compression support depends both on your Oracle database version and whether DMS is configured to use Oracle LogMiner to access the redo logs.
Version | Basic | OLTP | HCC (from Oracle 11g R2 or newer) | Others |
---|---|---|---|---|
Oracle 10 | No | N/A | N/A | No |
Oracle 11 or newer – Oracle LogMiner | Yes | Yes | Yes | Yes – Any compression method supported by Oracle LogMiner. |
Oracle 11 or newer – Binary Reader | Yes | Yes | Yes – For more information, see the following note. | Yes |
Note
When the Oracle source endpoint is configured to use Binary Reader, the Query Low level of the HCC compression method is supported for full-load tasks only.
Replicating nested tables using Oracle as a source for AWS DMS
AWS DMS supports the replication of Oracle tables containing columns that are nested tables or defined types. To enable this functionality, add the following extra connection attribute setting to the Oracle source endpoint.
allowSelectNestedTables=true;
AWS DMS creates the target tables from Oracle nested tables as regular parent and child tables on the target, without a unique constraint. To access the correct data on the target, join the parent and child tables. To do this, first manually create a nonunique index on the NESTED_TABLE_ID column in the target child table. You can then use the NESTED_TABLE_ID column in the join ON clause together with the parent column that corresponds to the child table name. In addition, creating such an index improves performance when AWS DMS updates or deletes data in the target child table. For an example, see Example join for parent and child tables on the target.
We recommend that you configure the task to stop after a full load completes. Then, create these nonunique indexes for all the replicated child tables on the target and resume the task.
If a captured nested table is added to an existing parent table (captured or not captured), AWS DMS handles it correctly. However, the nonunique index for the corresponding target table isn't created. In this case, if the target child table becomes extremely large, performance might be affected. In such a case, we recommend that you stop the task, create the index, then resume the task.
After the nested tables are replicated to the target, have the DBA run a join on the parent and corresponding child tables to flatten the data.
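As a sketch of the index step described above, a small helper can generate the nonunique-index DDL for each replicated child table. This is illustrative only; the index naming convention is our own, and you would run the generated statement against the target with your own client.

```python
def nested_table_index_ddl(child_table: str) -> str:
    """Build the nonunique-index DDL that AWS DMS doesn't create for a
    replicated Oracle nested (child) table on the target."""
    return (
        f"CREATE INDEX idx_{child_table.lower()}_ntid "
        f"ON {child_table} (NESTED_TABLE_ID)"
    )

# Example: a child table named NAME_KEY (as in the join example later
# in this section)
print(nested_table_index_ddl("NAME_KEY"))
```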
Prerequisites for replicating Oracle nested tables as a source
Ensure that you replicate parent tables for all the replicated nested tables. Include both the parent tables (the tables containing the nested table column) and the child (that is, nested) tables in the AWS DMS table mappings.
Supported Oracle nested table types as a source
AWS DMS supports the following Oracle nested table types as a source:
-
Data type
-
User defined object
Limitations of AWS DMS support for Oracle nested tables as a source
AWS DMS has the following limitations in its support of Oracle nested tables as a source:
-
AWS DMS supports only one level of table nesting.
-
AWS DMS table mapping doesn't check that both the parent and child table or tables are selected for replication. That is, it's possible to select a parent table without a child or a child table without a parent.
How AWS DMS replicates Oracle nested tables as a source
AWS DMS replicates parent and nested tables to the target as follows:
-
AWS DMS creates the parent table identical to the source. It then defines the nested column in the parent as RAW(16) and includes a reference to the parent's nested tables in its NESTED_TABLE_ID column.
column. -
AWS DMS creates the child table identical to the nested source, but with an additional column named NESTED_TABLE_ID. This column has the same type and value as the corresponding parent nested column and has the same meaning.
Example join for parent and child tables on the target
To flatten the parent table, run a join between the parent and child tables, as shown in the following example:
-
Create the type.

CREATE OR REPLACE TYPE NESTED_TEST_T AS TABLE OF VARCHAR(50);
-
Create the parent table with a column of type NESTED_TEST_T, as defined preceding.

CREATE TABLE NESTED_PARENT_TEST (ID NUMBER(10,0) PRIMARY KEY, NAME NESTED_TEST_T) NESTED TABLE NAME STORE AS NAME_KEY;
-
Flatten the table NESTED_PARENT_TEST using a join with the NAME_KEY child table, where CHILD.NESTED_TABLE_ID matches PARENT.NAME.

SELECT … FROM NESTED_PARENT_TEST PARENT, NAME_KEY CHILD WHERE CHILD.NESTED_TABLE_ID = PARENT.NAME;
Storing REDO on Oracle ASM when using Oracle as a source for AWS DMS
For Oracle sources with high REDO generation, storing REDO on Oracle ASM can benefit performance, especially in a RAC configuration, because you can configure DMS to distribute ASM REDO reads across all ASM nodes.
To use this configuration, set the asmServer connection attribute. For example, the following connection string distributes DMS REDO reads across three ASM nodes:
asmServer=(DESCRIPTION=(CONNECT_TIMEOUT=8)(ENABLE=BROKEN)(LOAD_BALANCE=ON)(FAILOVER=ON)
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=asm_node1_ip_address)(PORT=asm_node1_port_number))
    (ADDRESS=(PROTOCOL=tcp)(HOST=asm_node2_ip_address)(PORT=asm_node2_port_number))
    (ADDRESS=(PROTOCOL=tcp)(HOST=asm_node3_ip_address)(PORT=asm_node3_port_number)))
  (CONNECT_DATA=(SERVICE_NAME=+ASM)))
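A descriptor like this can also be assembled programmatically when the node list changes often. The following Python sketch is illustrative only; the host and port values are placeholders, and the fixed descriptor options mirror the example above.

```python
def asm_server_attribute(nodes):
    """Build the asmServer connection-attribute value for a list of
    (host, port) ASM nodes, following the descriptor shown above."""
    addresses = "".join(
        f"(ADDRESS=(PROTOCOL=tcp)(HOST={host})(PORT={port}))"
        for host, port in nodes
    )
    return (
        "asmServer=(DESCRIPTION=(CONNECT_TIMEOUT=8)(ENABLE=BROKEN)"
        "(LOAD_BALANCE=ON)(FAILOVER=ON)"
        f"(ADDRESS_LIST={addresses})"
        "(CONNECT_DATA=(SERVICE_NAME=+ASM)))"
    )

# Placeholder node addresses
print(asm_server_attribute([("10.0.0.1", 1521), ("10.0.0.2", 1521)]))
```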
When using NFS to store Oracle REDO, make sure that the applicable DNFS (Direct NFS) client patches are applied, specifically any patch addressing Oracle bug 25224242. For additional information, see the Oracle publication Recommended Patches for Direct NFS Client.
Additionally, to improve NFS read performance, we recommend that you increase the values of rsize and wsize in fstab for the NFS volume, as shown in the following example.

NAS_name_here:/ora_DATA1_archive /u09/oradata/DATA1 nfs rw,bg,hard,nointr,tcp,nfsvers=3,_netdev,timeo=600,rsize=262144,wsize=262144
Also, adjust the tcp-max-xfer-size value as follows:

vserver nfs modify -vserver vserver -tcp-max-xfer-size 262144
Endpoint settings when using Oracle as a source for AWS DMS
You can use endpoint settings to configure your Oracle source database, similar to using extra connection attributes. You specify the settings when you create the source endpoint using the AWS DMS console, or by using the create-endpoint command in the AWS CLI, with the --oracle-settings '{"EndpointSetting": "value", ...}' JSON syntax.
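As a sketch of the create-endpoint syntax above, the JSON document passed to --oracle-settings can be built programmatically. This is illustrative only; the helper name is our own, and the example uses the UseLogminerReader and UseBfile settings described in the table that follows.

```python
import json

def oracle_settings_json(settings: dict) -> str:
    """Serialize endpoint settings into the JSON string passed to
    `aws dms create-endpoint --oracle-settings`."""
    return json.dumps(settings)

# Example: configure Binary Reader instead of LogMiner for CDC
# (setting names are taken from the table that follows)
payload = oracle_settings_json({"UseLogminerReader": False, "UseBfile": True})
print(payload)
```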
The following table shows the endpoint settings that you can use with Oracle as a source.
Name | Description |
---|---|
AccessAlternateDirectly |
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access. For more information, see Configuring a CDC task to use Binary Reader with an RDS for Oracle source for AWS DMS. Default value: true Valid values: true/false Example: |
|
Set this attribute with Although AWS DMS supports the use of the Oracle
Valid values : Archive destination Ids Example: |
|
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables one of the following on all tables selected for a migration task, depending on table metadata:
Default value: false Valid values: true/false Example: NoteIf you use this option, you still need to enable database-level supplemental logging as discussed previously. |
|
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types. For more information, see Replicating nested tables using Oracle as a source for AWS DMS. Default value: false Valid values: true/false Example: |
|
Specifies the ID of the destination for the archived redo
logs. This value should be the same as a number in the dest_id
column of the v$archived_log view. If you work with an
additional redo log destination, we recommend that you use the
Default value: 1 Valid values: Number Example: |
|
When this field is set to Y, AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges. Default value: N Valid values: Y/N Example: |
|
Use this extra connection attribute (ECA) when capturing source changes
with BinaryReader. This setting allows DMS to
buffer 50 reads at ASM level per single read thread while controlling
the number of threads using the Default value: false Valid values: true/false ECA Example: |
|
Set this attribute to Default value: false Valid values: true/false Example: |
|
Set this attribute to Note that this feature and enhancement was introduced in AWS DMS version 3.4.7. Default value: false Valid values: true/false Example: |
|
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target. Default value: false Valid values: true/false Example: |
|
Set this attribute to an escape character. This escape character allows you to make a single wildcard character behave as a normal character in table mapping expressions. For more information, see Wildcards in table mapping. Default value: Null Valid values: Any character other than a wildcard character Example: NoteYou can only use |
|
Use this attribute to pull data once from a view; you can't use it for ongoing replication. When you extract data from a view, the view is shown as a table on the target schema. Default value: false Valid values: true/false Example: |
|
Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the dest_id column in the v$archived_log view. Use this setting with the ArchivedLogDestId extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup. This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, AWS DMS needs information about which destination to get archived redo logs from to read changes. AWS DMS needs this because after the switchover the previous primary is a standby instance. Valid values: Archive destination IDs Example: |
|
When set to If a task is set to limited LOB mode and this option is set to
Default value: false Valid values: Boolean Example: |
|
Use this extra connection attribute (ECA) to have DMS ignore transactions from a specified user when replicating data from Oracle with LogMiner. You can pass comma-separated user name values, but they must be all capital letters. ECA Example: |
|
Specifies the number scale. You can select a scale up to 38, or you can select -1 for FLOAT, or -2 for VARCHAR. By default, the NUMBER data type is converted to precision 38, scale 10. Default value: 10 Valid values: -2 to 38 (-2 for VARCHAR, -1 for FLOAT) Example: NoteSelect a precision-scale combination, -1 (FLOAT) or -2 (VARCHAR). DMS supports any precision-scale combination supported by Oracle. If precision is 39 or above, select -2 (VARCHAR). The NumberDataTypeScale setting for the Oracle database is used for the NUMBER data type only (without the explicit precision and scale definition). |
|
Provides the timeframe in minutes to check for any open transactions for a CDC-only task. Note: When you set Default value: 0 Valid values: An integer from 0 to 240 Example: |
OraclePathPrefix |
Set this string attribute to the required value in order to use
the Binary Reader to capture change data for an Amazon RDS for Oracle as
the source. This value specifies the default Oracle root used to
access the redo logs. For more information, see Configuring a CDC task
to use Binary Reader with an RDS for Oracle source for AWS DMS. Default value: none Valid value: /rdsdbdata/db/ORCL_A/ Example: |
ParallelASMReadThreads |
Set this attribute to change the number of threads that DMS
configures to perform change data capture (CDC) using Oracle
Automatic Storage Management (ASM). You can specify an integer
value between 2 (the default) and 8 (the maximum). Use this
attribute together with the Default value: 2 Valid values: An integer from 2 to 8 Example: |
ReadAheadBlocks |
Set this attribute to change the number of read-ahead blocks
that DMS configures to perform CDC using Oracle Automatic
Storage Management (ASM) and non-ASM NAS storage. You can specify an integer value
between 1000 (the default) and 200,000 (the maximum). Use this
attribute together with the Default value: 1000 Valid values: An integer from 1000 to 200,000 Example: |
|
When set to Default value: false Valid values: Boolean Example: |
ReplacePathPrefix |
Set this attribute to true in order to use the Binary Reader to
capture change data for an Amazon RDS for Oracle as the source. This
setting tells the DMS instance to replace the default Oracle root with
the specified UsePathPrefix setting to access the redo
logs. For more information, see Configuring a CDC task
to use Binary Reader with an RDS for Oracle source for AWS DMS.Default value: false Valid values: true/false Example: |
|
Specifies the number of seconds that the system waits before resending a query. Default value: 5 Valid values: Numbers starting from 1 Example: |
|
Specifies the name of a key used for the transparent data encryption (TDE) of the columns and tablespace in the Oracle source database. For more information on setting this attribute and its associated password on the Oracle source endpoint, see Supported encryption methods for using Oracle as a source for AWS DMS. Default value: "" Valid values: String Example: |
|
For Oracle version 12.1 or earlier sources migrating to PostgreSQL targets, use this attribute to convert SDO_GEOMETRY to GEOJSON format. By default, AWS DMS calls the Default value: SDO2GEOJSON Valid values: String Example: |
|
Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases. In AWS DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production. Default value: 0 Valid values: Number Example: Note: When using DMS 3.4.6, 3.4.7 and higher, use of this connection setting is optional. In the latest version of DMS 3.4.6 and version 3.4.7, |
UseAlternateFolderForOnline |
Set this attribute to true in order to use the Binary Reader to
capture change data for an Amazon RDS for Oracle as the source. This
tells the DMS instance to use any specified prefix replacement to
access all online redo logs. For more information, see Configuring a CDC task
to use Binary Reader with an RDS for Oracle source for AWS DMS. Default value: false Valid values: true/false Example: |
UseBfile |
Set this attribute to Y in order to capture change data using
the Binary Reader utility. Set Note: When setting this value as an Extra Connection Attribute (ECA), valid values are 'Y' and 'N'. When
setting this value as an endpoint setting, valid values are Default value: N Valid values: Y/N (when setting this value as an ECA); true/false (when setting this value as an endpoint setting). Example: |
|
Set this attribute to Y to capture change data using the LogMiner utility (the default). Set this option to N if you want AWS DMS to access the redo logs as a binary file. When you set this option to N, also add the setting useBfile=Y. For more information on this setting and using Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or AWS DMS Binary Reader for CDC. Note: When setting this value as an Extra Connection Attribute (ECA), valid values are 'Y' and 'N'. When
setting this value as an endpoint setting, valid values are Default value: Y Valid values: Y/N (when setting this value as an ECA); true/false (when setting this value as an endpoint setting). Example: |
UsePathPrefix |
Set this string attribute to the required value in order to use
the Binary Reader to capture change data for an Amazon RDS for Oracle as
the source. This value specifies the path prefix used to replace the
default Oracle root to access the redo logs. For more information,
see Configuring a CDC task
to use Binary Reader with an RDS for Oracle source for AWS DMS. Default value: none Valid value: /rdsdbdata/log/ Example: |
Source data types for Oracle
The Oracle endpoint for AWS DMS supports most Oracle data types. The following table shows the Oracle source data types that are supported when using AWS DMS and the default mapping to AWS DMS data types.
Note
With the exception of the LONG and LONG RAW data types, when replicating from an Oracle source to an Oracle target (a homogeneous replication), all of the source and target data types are identical. However, the LONG data type is mapped to CLOB, and the LONG RAW data type is mapped to BLOB.
For information on how to view the data type that is mapped in the target, see the section for the target endpoint you are using.
For additional information about AWS DMS data types, see Data types for AWS Database Migration Service.
Oracle data type |
AWS DMS data type |
---|---|
BINARY_FLOAT |
REAL4 |
BINARY_DOUBLE |
REAL8 |
BINARY |
BYTES |
FLOAT (P) |
If precision is less than or equal to 24, use REAL4. If precision is greater than 24, use REAL8. |
NUMBER (P,S) |
When scale is greater than 0, use NUMERIC. When scale is 0:
When scale is less than 0, use REAL8. |
DATE |
DATETIME |
INTERVAL_YEAR TO MONTH |
STRING (with interval year_to_month indication) |
INTERVAL_DAY TO SECOND |
STRING (with interval day_to_second indication) |
TIMESTAMP |
DATETIME |
TIMESTAMP WITH TIME ZONE |
STRING (with timestamp_with_timezone indication) |
TIMESTAMP WITH LOCAL TIME ZONE |
STRING (with timestamp_with_local_ timezone indication) |
CHAR |
STRING |
VARCHAR2 |
|
NCHAR |
WSTRING |
NVARCHAR2 |
|
RAW |
BYTES |
REAL |
REAL8 |
BLOB |
BLOB To use this data type with AWS DMS, you must enable the use of BLOB data types for a specific task. AWS DMS supports BLOB data types only in tables that include a primary key. |
CLOB |
CLOB To use this data type with AWS DMS, you must enable the use of CLOB data types for a specific task. During CDC, AWS DMS supports CLOB data types only in tables that include a primary key. |
NCLOB |
NCLOB To use this data type with AWS DMS, you must enable the use of NCLOB data types for a specific task. During CDC, AWS DMS supports NCLOB data types only in tables that include a primary key. |
LONG |
CLOB The LONG data type isn't supported in batch-optimized apply mode (TurboStream CDC mode). To use this data type with AWS DMS, enable the use of LOBs for a specific task. During CDC or full load, AWS DMS supports LOB data types only in tables that have a primary key. Also, AWS DMS doesn't support full LOB mode for loading LONG columns. Instead, you can use limited LOB mode for migrating LONG columns to an Oracle target. In limited LOB mode, AWS DMS truncates any data to 64 KB that you set to LONG columns longer than 64 KB. For more information about LOB support in AWS DMS, see Setting LOB support for source databases in an AWS DMS task |
LONG RAW |
BLOB The LONG RAW data type isn't supported in batch-optimized apply mode (TurboStream CDC mode). To use this data type with AWS DMS, enable the use of LOBs for a specific task. During CDC or full load, AWS DMS supports LOB data types only in tables that have a primary key. Also, AWS DMS doesn't support full LOB mode for loading LONG RAW columns. Instead, you can use limited LOB mode for migrating LONG RAW columns to an Oracle target. In limited LOB mode, AWS DMS truncates any data to 64 KB that you set to LONG RAW columns longer than 64 KB. For more information about LOB support in AWS DMS, see Setting LOB support for source databases in an AWS DMS task |
XMLTYPE |
CLOB |
SDO_GEOMETRY |
BLOB (when an Oracle to Oracle migration) CLOB (when an Oracle to PostgreSQL migration) |
Oracle tables used as a source with columns of the following data types aren't supported and can't be replicated. Replicating columns with these data types results in a null column.
-
BFILE
-
ROWID
-
REF
-
UROWID
-
User-defined data types
-
ANYDATA
-
VARRAY
Note
Virtual columns aren't supported.
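To find affected columns before migration, you can generate a catalog query for the unsupported built-in types listed above. This Python sketch is illustrative only: it covers only the named built-in types (user-defined types and VARRAYs appear in ALL_TAB_COLUMNS under their own type names, so they need a separate check), and the schema name is a placeholder. Run the generated SQL against the source with your own Oracle client.

```python
# Built-in types from the unsupported list above
UNSUPPORTED_TYPES = ["BFILE", "ROWID", "REF", "UROWID", "ANYDATA"]

def unsupported_column_query(schema: str) -> str:
    """Generate a query that lists source columns whose built-in data
    types AWS DMS can't replicate from an Oracle source."""
    type_list = ", ".join(f"'{t}'" for t in UNSUPPORTED_TYPES)
    return (
        "SELECT table_name, column_name, data_type "
        "FROM all_tab_columns "
        f"WHERE owner = '{schema}' AND data_type IN ({type_list})"
    )

# Placeholder schema name
print(unsupported_column_query("DMS_USER"))
```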
Migrating Oracle spatial data types
Spatial data identifies the geometry information for an object or location in space. In an Oracle database, the geometric description of a spatial object is stored in an object of type SDO_GEOMETRY. Within this object, the geometric description is stored in a single row in a single column of a user-defined table.
AWS DMS supports migrating the Oracle type SDO_GEOMETRY from an Oracle source to either an Oracle or PostgreSQL target.
When you migrate Oracle spatial data types using AWS DMS, be aware of these considerations:
-
When migrating to an Oracle target, make sure to manually transfer USER_SDO_GEOM_METADATA entries that include type information.
-
When migrating from an Oracle source endpoint to a PostgreSQL target endpoint, AWS DMS creates target columns. These columns have default geometry and geography type information with a 2D dimension and a spatial reference identifier (SRID) equal to zero (0). An example is
GEOMETRY, 2, 0
. -
For Oracle version 12.1 or earlier sources migrating to PostgreSQL targets, convert SDO_GEOMETRY objects to GEOJSON format by using the SDO2GEOJSON function, or the spatialSdo2GeoJsonFunctionName extra connection attribute. For more information, see Endpoint settings when using Oracle as a source for AWS DMS. -
AWS DMS supports Oracle Spatial Column migrations for Full LOB mode only. AWS DMS doesn't support Limited LOB or Inline LOB modes. For more information about LOB mode, see Setting LOB support for source databases in an AWS DMS task.
Because AWS DMS only supports Full LOB mode for migrating Oracle Spatial columns, the table containing the columns needs a primary key and a unique key. If the table doesn't have a primary key and a unique key, it's skipped during migration.