Migrate an Oracle PeopleSoft database to AWS by using AWS DMS - AWS Prescriptive Guidance

Migrate an Oracle PeopleSoft database to AWS by using AWS DMS

Created by Sampath Kathirvel (AWS)

Environment: Production

Source: Oracle PeopleSoft

Target: Amazon RDS for Oracle

R Type: Replatform

Workload: Oracle

Technologies: Migration; Databases

AWS services: AWS DMS; Amazon RDS

Summary

Oracle PeopleSoft is an enterprise resource planning (ERP) solution for enterprise-wide processes. PeopleSoft has a three-tier architecture: client, application, and database. PeopleSoft can be run on Amazon Relational Database Service (Amazon RDS).

If you migrate your Oracle database to Amazon RDS, Amazon Web Services (AWS) can take care of backup tasks and high availability, leaving you free to concentrate on maintaining your PeopleSoft application and its functionality. For a comprehensive list of key factors to consider during the migration process, see Oracle database migration strategies in AWS Prescriptive Guidance.

This pattern provides a solution for migrating your on-premises Oracle databases to Amazon RDS for Oracle using Oracle Data Pump with AWS Database Migration Service (AWS DMS) and its change data capture (CDC) feature.

When migrating critical ERP applications such as Oracle PeopleSoft, minimizing downtime is key. AWS DMS minimizes downtime by supporting both full load and continuous replication from the source database to the target database. AWS DMS also provides real-time monitoring and logging of the migration, which can help you identify and resolve issues that could cause downtime.

When replicating changes with AWS DMS, you must specify a time or system change number (SCN) as the starting point for AWS DMS to read changes from the database logs. It's crucial to keep these logs accessible on the server for a designated amount of time to ensure that AWS DMS has access to these changes.
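For example, you can confirm on the source database that it is generating archived logs and see how far back they remain available. This is a sketch; the retention window you need depends on the gap between the Data Pump export and the start of the AWS DMS task.

```sql
-- Run on the source database.
-- Confirm that the database is in ARCHIVELOG mode.
SELECT log_mode FROM v$database;

-- Check the oldest archived log that is still available on disk.
SELECT MIN(first_time) AS oldest_available_log
  FROM v$archived_log
 WHERE deleted = 'NO';
```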

Prerequisites and limitations

Prerequisites

  • Provisioned Amazon RDS for Oracle database in your AWS Cloud environment as the target database.

  • An Oracle PeopleSoft database running on premises or on Amazon Elastic Compute Cloud (Amazon EC2) in the AWS Cloud.

    Note: This pattern is designed for migrating from on premises to AWS, but it was tested by using Oracle Database on an Amazon EC2 instance. For migrating from on premises, you will need to configure the appropriate network connectivity.

  • Schema details. When migrating an Oracle PeopleSoft application to Amazon RDS for Oracle, it is necessary to identify which Oracle database schema (for example, SYSADM) to migrate. Before starting the migration process, gather the following details about the schema:

    • Size

    • The number of objects per object type

    • The number of invalid objects

    This information will aid in the migration process.
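These details can be gathered with queries against the Oracle data dictionary. The following is a sketch, assuming the SYSADM schema named earlier:

```sql
-- Approximate schema size in GB
SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
  FROM dba_segments
 WHERE owner = 'SYSADM';

-- Number of objects per object type
SELECT object_type, COUNT(*) AS object_count
  FROM dba_objects
 WHERE owner = 'SYSADM'
 GROUP BY object_type
 ORDER BY object_type;

-- Number of invalid objects
SELECT COUNT(*) AS invalid_objects
  FROM dba_objects
 WHERE owner = 'SYSADM'
   AND status = 'INVALID';
```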

Limitations

  • This scenario has been tested only with the PeopleSoft DEMO database. It hasn’t been tested with a large dataset.

Architecture

The following diagram shows an instance running an Oracle database as the source database and an Amazon RDS for Oracle database as the target database. The data is exported and imported from the source Oracle database to the target Amazon RDS for Oracle database using Oracle Data Pump and replicated for CDC changes using AWS DMS.

Five-step process from on-premises DB instance to Amazon RDS.
  1. Extract data from the source database by using Oracle Data Pump, and send it to the Amazon RDS for Oracle database target.

  2. Data is sent from the source database to a source endpoint in AWS DMS.

  3. From the source endpoint, the data is sent to the AWS DMS replication instance, where the replication task is performed.

  4. After the replication task is completed, the data is sent to the target endpoint in AWS DMS.

  5. From the target endpoint, the data is sent to the Amazon RDS for Oracle database instance.

Tools

AWS services

  • AWS Database Migration Service (AWS DMS) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.

  • Amazon Relational Database Service (Amazon RDS) for Oracle helps you set up, operate, and scale an Oracle relational database in the AWS Cloud.

Other services

  • Oracle Data Pump helps you move data and metadata from one database to another at high speeds.

Best practices

Migrating LOBs

If your source database contains large objects (LOBs) that need to be migrated to the target database, AWS DMS provides the following options:

  • Full LOB mode – AWS DMS migrates all the LOBs from the source to the target database regardless of their size. Although the migration is slower, the advantage is that data isn’t truncated. For better performance, you can create a separate task on the new replication instance to migrate the tables that have LOBs larger than a few megabytes.

  • Limited LOB mode – You specify the maximum size of LOB column data, which allows AWS DMS to pre-allocate resources and apply the LOBs in bulk. If the size of the LOB columns exceeds the size that is specified in the task, AWS DMS truncates the data and sends warnings to the AWS DMS log file. You can improve performance by using Limited LOB mode if your LOB data size is within the Limited LOB size.

  • Inline LOB mode – You can migrate LOBs without truncating the data or slowing the performance of your task by replicating both small and large LOBs. First, specify a value for the InlineLobMaxSize parameter, which is available only when Full LOB mode is set to true. The AWS DMS task transfers the small LOBs inline, which is more efficient. Then, AWS DMS migrates the large LOBs by performing a lookup from the source table. However, Inline LOB mode works only during the full load phase.

Generating sequence values

Keep in mind that during the change data capture process with AWS DMS, incremental sequence numbers are not replicated from the source database. To avoid discrepancies in sequence values, you must generate the most recent sequence value from the source for all sequences, and apply it to the target Amazon RDS for Oracle database.
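As a sketch, you can capture the current value of each sequence on the source and then advance the corresponding sequence on the target past that value. The sequence name and restart value below are illustrative placeholders:

```sql
-- On the source: capture the latest value of each SYSADM sequence.
SELECT sequence_name, last_number
  FROM dba_sequences
 WHERE sequence_owner = 'SYSADM';

-- On the target (Oracle 18c and later): restart a sequence past the captured value.
-- PS_EXAMPLE_SEQ and 23792009 are placeholders for illustration.
ALTER SEQUENCE SYSADM.PS_EXAMPLE_SEQ RESTART START WITH 23792009;
```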

Credential management

To help secure your AWS resources, we recommend following the best practices for AWS Identity and Access Management (IAM).

Epics

Task | Description | Skills required

Download the template.

Download the DMS_instance.yaml AWS CloudFormation template to provision the AWS DMS replication instance and its source and target endpoints.

Cloud administrator, DBA

Start the stack creation.

  1. On the AWS Management Console, choose CloudFormation.

  2. Choose Create stack.

  3. For Specify template, choose Upload a template file.

  4. Choose Choose file.

  5. Choose the DMS_instance.yaml file.

  6. Choose Next.

Cloud administrator, DBA

Specify the parameters.

  1. For Stack name, enter your stack name.

  2. Under AWS DMS Instance Parameters, enter the following parameters:

    • DMSInstanceType – Choose the required instance for the AWS DMS replication instance, based on your business needs.

    • DMSStorageSize – Enter the storage size for the AWS DMS instance, based on the size of your migration.

  3. Under Source Oracle Database Configuration, enter the following parameters:

    • SourceOracleEndpointID – The source Oracle database server name

    • SourceOracleDatabaseName – The source database service name or session ID (SID) as applicable

    • SourceOracleUserName – The source database username (the default is system)

    • SourceOracleDBPassword – The source database username’s password

    • SourceOracleDBPort – The source database port

  4. Under Target RDS for Oracle Database Configuration, enter the following parameters:

    • TargetRDSOracleEndpointID – The target RDS database endpoint

    • TargetRDSOracleDatabaseName – The target RDS database name

    • TargetRSOracleUserName – The target RDS username

    • TargetRDSOracleDBPassword – The target RDS password

    • TargetOracleDBPort – The target RDS database port

  5. Under VPC, Subnet and Security Group Configuration, enter the following parameters:

    • VPCID – The VPC for the replication instance

    • VPCSecurityGroupId – The VPC security group for the replication instance

    • DMSSubnet1 – The subnet for Availability Zone 1

    • DMSSubnet2 – The subnet for Availability Zone 2

  6. Choose Next.

Cloud administrator, DBA

Create the stack.

  1. On the Configure stack options page, for Tags, enter any optional values.

  2. Choose Next.

  3. On the Review page, verify the details, and then choose Submit.

The provisioning should complete in approximately 5–10 minutes. It is complete when the AWS CloudFormation Stacks page shows CREATE_COMPLETE.

Cloud administrator, DBA

Set up the endpoints.

  1. From the AWS Management Console, choose AWS Database Migration Service.

  2. Under Resource management, choose Replication instances.

  3. Under Resource management, choose Endpoints.

Cloud administrator, DBA

Test connectivity.

After the source and target endpoints show a status of Active, test the connectivity. Choose Run test for each endpoint (source and target) to make sure that the status shows as successful.

Cloud administrator, DBA
Task | Description | Skills required

Generate the SCN.

When the source database is active and in use by the application, initiate the data export with Oracle Data Pump. You must first generate a system change number (SCN) from the source database, both to ensure data consistency during the export with Oracle Data Pump and to serve as the starting point for change data capture in AWS DMS.

To generate the current SCN from your source database, enter the following SQL statement.

SQL> select name from v$database;

NAME
---------
PSFTDMO

SQL> SELECT current_scn FROM v$database;

CURRENT_SCN
-----------
   23792008

Save the generated SCN to use when you export the data and for creating the AWS DMS replication task.

DBA

Create the parameter file.

To create a parameter file for exporting the schema, you can use the following code.

$ cat exp_datapmp.par
userid=system/*******
directory=DATA_PUMP_DIR
logfile=export_dms_sample_user.log
dumpfile=export_dms_sample_data_%U.dmp
schemas=SYSADM
flashback_scn=23792008

Note: You can also define your own DATA_PUMP_DIR by using the following commands, based on your requirements.

SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/opt/oracle/product/19c/dbhome_1/dmsdump/';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO system;

Grant succeeded.

SQL> SELECT owner, directory_name, directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR';

OWNER     DIRECTORY_NAME     DIRECTORY_PATH
--------- ------------------ ------------------------------------------
SYS       DATA_PUMP_DIR      /opt/oracle/product/19c/dbhome_1/dmsdump/
DBA

Export the schema.

To perform the export, use the expdp utility.

$ expdp parfile=exp_datapmp.par
.......................
. . exported "SYSADM"."PS_XML_TEMPLT_LNG"      6.320 KB       0 rows
. . exported "SYSADM"."PS_XML_TEMPLT_LNK"      6.328 KB       0 rows
. . exported "SYSADM"."PS_XML_XLATDEF_LNG"     6.320 KB       0 rows
. . exported "SYSADM"."PS_XML_XLATITM_LNG"     7.171 KB       0 rows
. . exported "SYSADM"."PS_XPQRYRUNCNTL"        7.601 KB       0 rows
. . exported "SYSADM"."PS_XPQRYRUNPARM"        7.210 KB       0 rows
. . exported "SYSADM"."PS_YE_AMOUNTS"          9.351 KB       0 rows
. . exported "SYSADM"."PS_YE_DATA"             16.58 KB       0 rows
. . exported "SYSADM"."PS_YE_EE"               6.75 KB        0 rows
. . exported "SYSADM"."PS_YE_W2CP_AMOUNTS"     9.414 KB       0 rows
. . exported "SYSADM"."PS_YE_W2CP_DATA"        20.94 KB       0 rows
. . exported "SYSADM"."PS_YE_W2C_AMOUNTS"      10.27 KB       0 rows
. . exported "SYSADM"."PS_YE_W2C_DATA"         20.95 KB       0 rows
. . exported "SYSADM"."PS_ZBD_JOBCODE_TBL"     14.60 KB       0 rows
. . exported "SYSADM"."PTGRANTTBL"             5.468 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /opt/oracle/product/19c/dbhome_1/dmsdump/export_dms_sample_data_01.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Dec 19 20:13:57 2022 elapsed 0 00:38:22
DBA
Task | Description | Skills required

Transfer the dump file to the target instance.

To transfer your files using DBMS_FILE_TRANSFER, you need to create a database link from the source database to the Amazon RDS for Oracle instance. After the link is established, you can use the utility to transfer the Data Pump files directly to the RDS instance.

Alternatively, you can transfer the Data Pump files to Amazon Simple Storage Service (Amazon S3) and then import them into the Amazon RDS for Oracle instance. For more information about this option, see the Additional information section.

To create a database link ORARDSDB that connects to the Amazon RDS master user at the target DB instance, run the following commands on the source database.

$ sqlplus / as sysdba

SQL> create database link orardsdb
     connect to admin identified by "*****"
     using '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = testpsft.*******.us-west-2.rds.amazonaws.com)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl)))';

Database link created.
DBA

Test the database link.

Test the database link to make sure that you can connect using sqlplus to the Amazon RDS for Oracle target database.

SQL> select name from v$database@orardsdb;

NAME
---------
ORCL
DBA

Transfer the dump file to the target database.

To copy the dump file to the Amazon RDS for Oracle database, you can either use the default DATA_PUMP_DIR directory or create your own directory by using the following code.

exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'TARGET_PUMP_DIR');

The following script copies a dump file named export_dms_sample_data_01.dmp from the source instance to a target Amazon RDS for Oracle database using the database link named orardsdb.

$ sqlplus / as sysdba

SQL> BEGIN
       DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object      => 'DATA_PUMP_DIR',
         source_file_name             => 'export_dms_sample_data_01.dmp',
         destination_directory_object => 'TARGET_PUMP_DIR',
         destination_file_name        => 'export_dms_sample_data_01.dmp',
         destination_database         => 'orardsdb');
     END;
     /

PL/SQL procedure successfully completed.
DBA

List the dump file in the target database.

After the PL/SQL procedure is completed, you can list the data dump file in the Amazon RDS for Oracle database by using the following code.

SQL> select * from table (rdsadmin.rds_file_util.listdir(p_directory => 'TARGET_PUMP_DIR'));
DBA

Initiate the import on the target database.

Before you start the import process, set up the roles, schemas, and tablespaces on the target Amazon RDS for Oracle database by using the data dump file.

To perform the import, access the target database with the Amazon RDS master user account, and use the connection string name in the tnsnames.ora file, which includes the Amazon RDS for Oracle Database tns-entry. If necessary, you can include a remap option to import the data dump file into a different tablespace or under a different schema name.

To start the import, use the following code.

impdp admin@orardsdb directory=TARGET_PUMP_DIR logfile=import.log dumpfile=export_dms_sample_data_01.dmp
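If you use the remap option mentioned earlier, the command might look like the following sketch; the target schema name, target tablespace name, and source tablespace name (PSDEFAULT) are illustrative assumptions:

```shell
# Hypothetical remap variant: SYSADM_NEW and PSNEW are placeholder names.
impdp admin@orardsdb directory=TARGET_PUMP_DIR logfile=import_remap.log \
  dumpfile=export_dms_sample_data_01.dmp \
  remap_schema=SYSADM:SYSADM_NEW \
  remap_tablespace=PSDEFAULT:PSNEW
```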

To ensure a successful import, check the import log file for any errors, and review details such as object count, row count, and invalid objects. If there are any invalid objects, recompile them. Additionally, compare the source and target database objects to confirm that they match.
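The object-count comparison can be sketched as follows, run from the source database (which can reach the target over the orardsdb database link) and on the target:

```sql
-- On the source: object types whose counts differ between source and target.
-- Any rows returned indicate a mismatch to investigate.
SELECT object_type, COUNT(*) FROM dba_objects
 WHERE owner = 'SYSADM' GROUP BY object_type
MINUS
SELECT object_type, COUNT(*) FROM dba_objects@orardsdb
 WHERE owner = 'SYSADM' GROUP BY object_type;

-- On the target: recompile invalid objects, then recheck the count.
EXEC UTL_RECOMP.RECOMP_SERIAL('SYSADM');
SELECT COUNT(*) FROM dba_objects
 WHERE owner = 'SYSADM' AND status = 'INVALID';
```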

DBA
Task | Description | Skills required

Create the replication task.

Create the AWS DMS replication task by using the following steps:

  1. On the AWS DMS console, under Conversion & migration, choose Database migration task.

  2. Under Task configuration, for Task identifier, enter your task identifier.

  3. For Replication instance, choose the DMS replication instance that you created.

  4. For Source database endpoint, choose your source endpoint.

  5. For Target database endpoint, choose your target Amazon RDS for Oracle database.

  6. For Migration type, choose Replicate data changes only. If you receive a message that supplemental logging needs to be turned on, follow the instructions in the Additional information section.

  7. Under Task settings, select Specify log sequence number.

  8. For System change number, enter the Oracle database SCN that you generated from the source Oracle database.

  9. Choose Enable validation.

  10. Choose Enable CloudWatch Logs.

    By activating these features, you can validate the data and review the AWS DMS replication instance logs in Amazon CloudWatch.

  11. Under Selection rules, complete the following:

    • For Schema, choose Enter a schema.

    • For Schema name, enter SYSADM.

    • For Table name, enter %.

    • For Action, choose Include.

  12. Under Transformation rules, complete the following:

    • For Target, choose Table.

    • For Schema name, choose Enter a schema.

    • For Schema name, enter SYSADM.

    • For Action, choose Rename to.

  13. Choose Create task.

After you create the task, it replicates ongoing changes to the Amazon RDS for Oracle DB instance, starting from the SCN that you provided under CDC start mode. You can also verify the replication by reviewing the CloudWatch logs.

Cloud administrator, DBA
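The same task can also be created from the AWS CLI. The following is a sketch; the ARNs and the table-mappings file are placeholders for your own values:

```shell
# Illustrative only: replace the ARNs and mapping file with your own.
aws dms create-replication-task \
  --replication-task-identifier psft-cdc-task \
  --source-endpoint-arn arn:aws:dms:us-west-2:111122223333:endpoint:SOURCE \
  --target-endpoint-arn arn:aws:dms:us-west-2:111122223333:endpoint:TARGET \
  --replication-instance-arn arn:aws:dms:us-west-2:111122223333:rep:INSTANCE \
  --migration-type cdc \
  --cdc-start-position "23792008" \
  --table-mappings file://table-mappings.json
```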
Task | Description | Skills required

Validate the data transfer.

After the AWS DMS task starts, you can check the Table statistics tab on the Tasks page to see the changes made to the data.

You can monitor the status of ongoing replication in the console on the Database migration tasks page.

For more information, see AWS DMS data validation.

Cloud administrator, DBA
Task | Description | Skills required

Stop replication.

Discontinue the replication procedure and halt the source application services.

Cloud administrator, DBA

Launch the PeopleSoft middle tier.

Launch the target PeopleSoft middle tier application in AWS, and direct it to the recently migrated Amazon RDS for Oracle database.

When you access the application, you should notice that all app connections are now established with the Amazon RDS for Oracle database.

DBA, PeopleSoft administrator

Turn off the source database.

After you confirm that there are no more connections to the source database, you can turn it off.
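To confirm that no application sessions remain, you can query v$session on the source. This sketch assumes that the application connects as the SYSADM user:

```sql
-- Run on the source: any rows returned are connections that still need to drain.
SELECT username, machine, program, COUNT(*) AS sessions
  FROM v$session
 WHERE username = 'SYSADM'
 GROUP BY username, machine, program;
```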

DBA

Related resources

Additional information

Transfer files using Amazon S3

To transfer the files to Amazon S3, you can use the AWS CLI or the Amazon S3 console. After you transfer the files to Amazon S3, you can use the Amazon RDS for Oracle instance to import the Data Pump files from Amazon S3.

If you choose to transfer the dump file by using Amazon S3 integration as an alternate method, perform the following steps:

  1. Create an S3 bucket.

  2. Export the data from the source database using Oracle Data Pump.

  3. Upload the Data Pump files to the S3 bucket.

  4. Download the Data Pump files from the S3 bucket to the target Amazon RDS for Oracle database.

  5. Perform the import using the Data Pump files.

Note: To transfer large data files between Amazon S3 and RDS instances, we recommend using the Amazon S3 Transfer Acceleration feature.
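After the dump file is in the S3 bucket, step 4 can be performed with the rdsadmin S3 integration procedures. This is a sketch; the bucket name is a placeholder, and it assumes the RDS instance has the S3_INTEGRATION option and an appropriate IAM role attached:

```sql
-- Downloads the dump file from the S3 bucket into DATA_PUMP_DIR on the RDS instance.
-- 'my-migration-bucket' is a placeholder bucket name.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'my-migration-bucket',
         p_s3_prefix      => 'export_dms_sample_data_01.dmp',
         p_directory_name => 'DATA_PUMP_DIR') AS task_id
  FROM dual;
```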

Activate supplemental logging

If you receive a warning message to enable supplemental logging in the source database for ongoing replication, run the following statements.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;