AWS Database Migration Service
Step-by-Step Migration Guide (API Version 2016-01-01)

Migrating an On-Premises Oracle Database to Amazon Aurora Using AWS Database Migration Service

Following, you can find a high-level outline and also a complete step-by-step walkthrough that both show the process for migrating an on-premises Oracle database (the source endpoint) to an Amazon Aurora database (the target endpoint) using AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT).

AWS DMS migrates your data from your Oracle source into your Aurora target. AWS DMS also captures data manipulation language (DML) and data definition language (DDL) changes that happen on your source database and applies these changes to your target database. This way, AWS DMS helps keep your source and target databases in sync with each other. To facilitate the data migration, AWS DMS creates tables and primary key indexes on the target database if necessary.

However, AWS DMS doesn't migrate your secondary indexes, sequences, default values, stored procedures, triggers, synonyms, views and other schema objects not specifically related to data migration. To migrate these objects to your Aurora target, use the AWS Schema Conversion Tool.

We highly recommend that you follow along using the Amazon sample database. To find a tutorial that uses the sample database and instructions on how to get a copy of the sample database, see Working with the Sample Database for Migration.

If you’ve used AWS DMS before or you prefer clicking a mouse to reading, you probably want to work with the high-level outline. If you need the details and want a more measured approach (or run into questions), you probably want the step-by-step guide.

Topic: Migration from On-Premises Oracle to Aurora or MySQL on Amazon RDS

Time:

Cost:

Source Database: Oracle

Target Database: Amazon Aurora/MySQL

Restrictions:

Oracle Edition: Enterprise, Standard, Express and Personal

Oracle Version: 10g (10.2 and later), 11g, 12c (On Amazon Relational Database Service (Amazon RDS), 11g or higher is required.)

MySQL or Related Database Version: 5.5, 5.6, 5.7, MariaDB, Amazon Aurora

Character Set: utf8mb4 is not currently supported

Costs

Because AWS DMS isn't incorporated into the AWS pricing calculator yet, see the following table for a pricing estimate.

In addition to the setup on your own PC, you must create several AWS components to complete the migration process. The AWS components include:

AWS Service | Type | Description
Amazon RDS Aurora DB instance | db.r3.large | Single-AZ, 10 GB storage, 1 million I/Os
AWS DMS replication instance | t2.large | Includes 50 GB of storage for keeping replication logs
AWS DMS data transfer | -- | Free, based on the amount of data transferred for the sample database
Data transfer out | -- | First 1 GB per month free

Assuming you run this tutorial for 2 hours, the following is a pricing estimate:

Migration High-Level Outline

To migrate your data from Oracle to Aurora using AWS DMS, you take the following steps. If you’ve used AWS DMS before or prefer clicking a mouse to reading, the following summary should help you kick-start your migration. To get the details about migration or if you run into questions, see the step-by-step guide.
  • Step 1: Prepare Your Oracle Source Database
  • Step 2: Launch and Prepare Your Aurora Target Database
  • Step 3: Launch a Replication Instance
  • Step 4: Create a Source Endpoint
  • Step 5: Create a Target Endpoint
  • Step 6: Create and Run a Migration Task
Migration Step-by-Step Guide

Following, you can find step-by-step instructions for migrating an Oracle database from an on-premises environment to Amazon Aurora. These instructions assume that you have already completed the setup steps for using AWS DMS located at Setting Up to Use AWS Database Migration Service.

Step 1: Configure Your Oracle Source Database

To use Oracle as a source for AWS Database Migration Service (AWS DMS), you must first ensure that ARCHIVELOG MODE is on to provide information to LogMiner. AWS DMS uses LogMiner to read information from the archive logs so that AWS DMS can capture changes.

For AWS DMS to read this information, make sure the archive logs are retained on the database server as long as AWS DMS requires them. If you configure your task to begin capturing changes immediately, you should only need to retain archive logs for a little longer than the duration of the longest running transaction. Retaining archive logs for 24 hours is usually sufficient. If you configure your task to begin from a point in time in the past, archive logs need to be available from that time forward. For more specific instructions on enabling ARCHIVELOG MODE and ensuring log retention for your on-premises Oracle database, see the Oracle documentation.

To capture change data, AWS DMS requires that supplemental logging be enabled on your source database. Minimal supplemental logging must be enabled at the database level. AWS DMS also requires that identification key logging be enabled. This option causes the database to place all columns of a row's primary key in the redo log file whenever a row containing a primary key is updated (even if no value in the primary key has changed). You can set this option at the database or table level.

If your Oracle source is in Amazon RDS, your database is placed in ARCHIVELOG MODE if, and only if, you enable backups. The following command ensures that archive logs are retained on your RDS source for 24 hours:

exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);

To configure your Oracle source database

Run the following command to enable supplemental logging at the database level, which AWS DMS requires:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

For RDS:

exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');

Use the following command to enable identification key supplemental logging at the database level. AWS DMS requires supplemental key logging at the database level unless you allow AWS DMS to automatically add supplemental logging as needed or you enable key-level supplemental logging at the table level:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

For RDS:

exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','PRIMARY KEY');

Your source database incurs a small amount of overhead when key-level supplemental logging is enabled. Therefore, if you are migrating only a subset of your tables, you might want to enable key-level supplemental logging at the table level instead. To enable key-level supplemental logging at the table level, use the following command:

alter table table_name add supplemental log data (PRIMARY KEY) columns;
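If you want to confirm which supplemental logging options are already in effect before or after making these changes, you can query the standard V$DATABASE dictionary view. This is an optional verification sketch, not part of the required setup:

-- Check current supplemental logging settings on the source database
SELECT supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_all
FROM   v$database;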
If a table does not have a primary key, you have two options: you can add supplemental logging to all columns involved in the first unique index on the table (sorted by index name), or you can add supplemental logging on all columns of the table.

To add supplemental logging on a subset of columns in a table, that is, those involved in a unique index, run the following command:

ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP example_log_group (ID,NAME) ALWAYS;

To add supplemental logging for all columns of a table, run the following command:

alter table table_name add supplemental log data (ALL) columns;

Finally, create or configure a database account to be used by AWS DMS. We recommend that you use an account with the minimal privileges required by AWS DMS for your AWS DMS connection. AWS DMS requires the following privileges:

CREATE SESSION
SELECT ANY TRANSACTION
SELECT on V_$ARCHIVED_LOG
SELECT on V_$LOG
SELECT on V_$LOGFILE
SELECT on V_$DATABASE
SELECT on V_$THREAD
SELECT on V_$PARAMETER
SELECT on V_$NLS_PARAMETERS
SELECT on V_$TIMEZONE_NAMES
SELECT on V_$TRANSACTION
SELECT on ALL_INDEXES
SELECT on ALL_OBJECTS
SELECT on ALL_TABLES
SELECT on ALL_USERS
SELECT on ALL_CATALOG
SELECT on ALL_CONSTRAINTS
SELECT on ALL_CONS_COLUMNS
SELECT on ALL_TAB_COLS
SELECT on ALL_IND_COLUMNS
SELECT on ALL_LOG_GROUPS
SELECT on SYS.DBA_REGISTRY
SELECT on SYS.OBJ$
SELECT on DBA_TABLESPACES
SELECT on ALL_TAB_PARTITIONS
SELECT on ALL_ENCRYPTED_COLUMNS
SELECT on all tables migrated

If you want to capture and apply changes (CDC), you also need the following privileges:

EXECUTE on DBMS_LOGMNR
SELECT on V_$LOGMNR_LOGS
SELECT on V_$LOGMNR_CONTENTS
LOGMINING (for Oracle 12c and higher)
ALTER for any table being replicated (if you want AWS DMS to add supplemental logging)

For Oracle versions before 11.2.0.3, you also need SELECT on DBA_OBJECTS. If views are exposed, you also need SELECT on ALL_VIEWS.

Step 2: Configure Your Aurora Target Database

As with your source database, it’s a good idea to restrict the access of the user you’re connecting with. You can also create a temporary user that you can remove after the migration:

CREATE USER 'dms_user'@'%' IDENTIFIED BY 'dms_user';
GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE, SELECT ON <target database(s)>.* TO 'dms_user'@'%';

AWS DMS uses some control tables on the target in the database awsdms_control. The following commands ensure that your dms_user has the necessary access to the awsdms_control database:

GRANT ALL PRIVILEGES ON awsdms_control.* TO 'dms_user'@'%';
flush privileges;
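Before moving on, you can optionally confirm that the target account has the expected privileges. The following quick check is run as an administrative user on the Aurora cluster, where dms_user is the account created above:

SHOW GRANTS FOR 'dms_user'@'%';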
Step 3: Creating a Replication Instance

An AWS DMS replication instance performs the actual data migration between source and target. The replication instance also caches the changes during the migration. How much CPU and memory capacity a replication instance has influences the overall time required for the migration. Use the following procedure to set the parameters for a replication instance.

To create an AWS DMS replication instance

Sign in to the AWS Management Console, open the AWS DMS console at https://console.aws.amazon.com/dms/, and choose Replication instances. If you are signed in as an AWS Identity and Access Management (IAM) user, you must have the appropriate permissions to access AWS DMS. For more information on the permissions required, see IAM Permissions Needed to Use AWS DMS.

Choose Create replication instance. On the Create replication instance page, specify your replication instance information as follows.

Name: If you plan to launch multiple replication instances or share an account, choose a name that helps you quickly differentiate between the different replication instances.

Description: A good description gives others an idea of what the replication instance is being used for and can prevent accidents.

Instance class: AWS DMS can use a fair bit of memory and CPU. If you have a large database (many tables) or use a number of LOB data types, setting up a larger instance is probably better. As described following, you might be able to boost your throughput by running multiple tasks. Multiple tasks consume more resources and require a larger instance. Keep an eye on CPU and memory consumption as you run your tests. If you find you are using the full capacity of the CPU or swap space, you can easily scale up.

VPC: Here you can choose the VPC where your replication instance will be launched. We recommend that, if possible, you select the same VPC where either your source or target database is (or both). AWS DMS needs to access your source and target databases from within this VPC. If either or both of your database endpoints are outside of this VPC, modify your firewall rules to allow AWS DMS access.

Multi-AZ: If you choose Multi-AZ, AWS DMS launches a primary and secondary replication instance in separate Availability Zones. In the case of a catastrophic disk failure, the primary replication instance automatically fails over to the secondary, preventing an interruption in service. In most situations, if you are performing a migration, you won't need Multi-AZ. If your initial data load takes a long time and you need to keep the source and target databases in sync for a significant portion of time, you might consider running your migration server in a Multi-AZ configuration.

Publicly accessible: If either your source or your target database is outside of the VPC where your replication instance is, you need to make your replication instance publicly accessible.

In the Advanced section, set the following parameters, and then choose Next.

Allocated storage (GB): Storage is primarily consumed by log files and cached transactions. For cached transactions, storage is used only when the cached transactions need to be written to disk. Therefore, AWS DMS doesn't use a significant amount of storage. Some exceptions include the following: very large tables that incur a significant transaction load (loading a large table can take some time, so cached transactions are more likely to be written to disk during a large table load); tasks that are configured to pause prior to loading cached transactions (in this case, all transactions are cached until the full load completes for all tables, so a fair amount of storage might be consumed by cached transactions); and tasks configured with tables being loaded into Amazon Redshift (however, this configuration isn't an issue when Aurora is the target). In most cases, the default allocation of storage is sufficient. However, it's always a good idea to pay attention to storage-related metrics and scale up your storage if you find you are consuming more than the default allocation.

Replication Subnet Group: If you run in a Multi-AZ configuration, your replication subnet group needs subnets in at least two Availability Zones.

Availability Zone: If possible, locate your primary replication server in the same Availability Zone as your target database.

VPC Security group(s): With security groups you can control ingress and egress to your VPC. With AWS DMS you can associate one or more security groups with the VPC where your replication server launches.

KMS master key: With AWS DMS, all data is encrypted at rest using a KMS encryption key. By default, AWS DMS creates a new encryption key for your replication server. However, you can use an existing key if desired.
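If you prefer to script this step, the replication instance can also be created with the AWS CLI. The following is a minimal sketch; the instance identifier, subnet group, and security group values are placeholders you would substitute with your own:

# Create a t2.large replication instance with 50 GB of storage
aws dms create-replication-instance \
    --replication-instance-identifier oracle2aurora-repserver \
    --replication-instance-class dms.t2.large \
    --allocated-storage 50 \
    --replication-subnet-group-identifier my-dms-subnet-group \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --no-multi-az \
    --no-publicly-accessible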
Step 4: Create Your Oracle Source Endpoint

While your replication instance is being created, you can specify the Oracle source endpoint using the AWS Management Console. However, you can only test connectivity after the replication instance has been created, because the replication instance is used to test the connection.

To specify source or target database endpoints using the AWS console

In the AWS DMS console, choose Endpoints on the navigation pane. Choose Create endpoint. The Create database endpoint page appears. Specify your connection information for the source Oracle database. The following list describes the source settings.

Endpoint type: Choose Source.

Endpoint Identifier: Type an identifier for your Oracle endpoint. The identifier for your endpoint must be unique within an AWS Region.

Source Engine: Choose oracle.

Server name: If your database is on-premises, type an IP address that AWS DMS can use to connect to your database from the replication server. If your database is running on Amazon Elastic Compute Cloud (Amazon EC2) or Amazon RDS, type the public Domain Name Service (DNS) address.

Port: Type the port on which your database is listening for connections (the Oracle default is 1521).

SSL mode: Choose a Secure Sockets Layer (SSL) mode if you want to enable connection encryption for this endpoint. Depending on the mode you select, you might need to provide certificate and server certificate information.

Username: Type the user name for the database account you are using for the migration. We recommend that you create an account specific to your migration.

Password: Provide the password for the preceding user name.

Choose the Advanced tab to set values for extra connection strings and the encryption key.

Extra connection attributes: Here you can add values for extra attributes that control the behavior of your endpoint. A few of the most relevant attributes are listed here; for the full list, see the documentation. Separate multiple entries from each other by using a semicolon (;).

addSupplementalLogging: AWS DMS automatically adds supplemental logging if you enable this option (addSupplementalLogging=Y).

useLogminerReader: By default, AWS DMS uses Oracle LogMiner to capture change data from the logs. AWS DMS can also parse the logs using its proprietary technology. If you use Oracle 12c and need to capture changes to tables that include LOBs, set this to No (useLogminerReader=N).

numberDataTypeScale: Oracle supports a NUMBER data type that has no precision or scale. By default, NUMBER is converted to a number with a precision of 38 and scale of 10, number(38,10). Valid values are 0 to 38, or -1 for FLOAT.

archivedLogDestId: This option specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log view. When working with multiple log destinations (DEST_ID), we recommend that you specify a location identifier for archived redo logs. Doing so improves performance by ensuring that the correct logs are accessed from the outset. The default value for this option is 0.

KMS master key: Choose the encryption key to use to encrypt replication storage and connection information. If you choose (Default) aws/dms, the default AWS KMS key associated with your account and region is used.

Before you save your endpoint, you can test it. To do so, select a VPC and replication instance from which to perform the test. As part of the test, AWS DMS refreshes the list of schemas associated with the endpoint. (The schemas are presented as source options when creating a task using this source endpoint.)
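As with the replication instance, the source endpoint can also be defined and tested from the AWS CLI. The following sketch uses placeholder connection values, example ARNs, and the addSupplementalLogging attribute discussed above:

# Define the Oracle source endpoint
aws dms create-endpoint \
    --endpoint-identifier oracle-source \
    --endpoint-type source \
    --engine-name oracle \
    --server-name 10.0.1.25 \
    --port 1521 \
    --database-name ORCL \
    --username dms_user \
    --password my-secret-password \
    --extra-connection-attributes "addSupplementalLogging=Y"

# After the replication instance is available, verify connectivity
aws dms test-connection \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE \
    --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE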
Step 5: Create Your Aurora Target Endpoint

Next, specify the target Amazon Aurora database by providing the target endpoint settings.

To specify a target database endpoint using the AWS Management Console

In the AWS DMS console, choose Endpoints on the navigation pane. Choose Create endpoint. The Create database endpoint page appears. Specify your connection information for the target Aurora database. The following list describes the target settings.

Endpoint type: Choose Target.

Endpoint Identifier: Type an identifier for your Aurora endpoint. The identifier for your endpoint must be unique within an AWS Region.

Target Engine: Choose aurora.

Server name: Type the writer endpoint for your Aurora instance. The writer endpoint is the primary instance.

Port: Type the port assigned to the instance.

SSL mode: Choose an SSL mode if you want to enable connection encryption for this endpoint. Depending on the mode you select, you might need to provide certificate and server certificate information.

Username: Type the user name for the account you are using for the migration. We recommend that you create an account specific to your migration.

Password: Provide the password for the preceding user name.

Choose the Advanced tab to set values for extra connection strings and the encryption key if you need them.

Extra connection attributes: Here you can enter values for additional attributes that control the behavior of your endpoint. A few of the most relevant attributes are listed here; for the full list, see the documentation. Separate multiple entries from each other by using a semicolon (;).

targetDbType: By default, AWS DMS creates a different MySQL database for each schema being migrated. Sometimes you might want to combine objects from several schemas into a single database. To do so, set this option to specific_database (targetDbType=SPECIFIC_DATABASE).

initstmt: You use this option to invoke the MySQL initstmt connection parameter, and it accepts anything that parameter accepts. When working with an Aurora target, it's often useful to disable foreign key checks. To do so, use the initstmt parameter as follows: initstmt=SET FOREIGN_KEY_CHECKS=0

KMS master key: Choose the encryption key to use to encrypt replication storage and connection information. If you choose (Default) aws/dms, the default AWS KMS key associated with your account and region is used.

Prior to saving your endpoint, you have an opportunity to test it. To do so, you need to select a VPC and replication instance from which to perform the test.
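A CLI sketch of the same step follows, reusing the dms_user account from Step 2 and the targetDbType and initstmt attributes described above; the cluster endpoint shown is a placeholder:

# Define the Aurora target endpoint
aws dms create-endpoint \
    --endpoint-identifier aurora-target \
    --endpoint-type target \
    --engine-name aurora \
    --server-name my-aurora-cluster.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --username dms_user \
    --password dms_user \
    --extra-connection-attributes "targetDbType=SPECIFIC_DATABASE;initstmt=SET FOREIGN_KEY_CHECKS=0"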
Step 6: Create a Migration Task

When you create a migration task, you tell AWS DMS exactly how you want your data migrated. Within a task you define which tables to migrate, where to migrate them, and how to migrate them. If you're planning to use the change capture and apply capability of AWS DMS, it's important to know that transactions are maintained within a single task. In other words, you should migrate all tables that participate in a single transaction together in the same task.

Using an AWS DMS task, you can specify what schema to migrate and the type of migration. You can migrate existing data, migrate existing data and replicate ongoing changes, or replicate data changes only. This walkthrough migrates existing data only.

To create a migration task

On the navigation pane, choose Tasks. Choose Create Task. On the Create Task page, specify the task options as follows.

Task name: It's always a good idea to give your task a descriptive name that helps with organization.

Task description: Type a description for the task.

Source endpoint: Select your source endpoint.

Target endpoint: Select your target endpoint.

Replication instance: Select a replication instance on which to run the task. Remember, your source and target endpoints must be accessible from this instance.

Migration type: You can use three different migration types with AWS DMS.

Migrate existing data: If you select this option, AWS DMS migrates only your existing data. Changes to your source data aren't captured and applied to your target. If you can afford an outage for the duration of the full load, migrating with this option is simple and straightforward. This method is also good for creating test copies of your database.

Migrate existing data and replicate ongoing changes: With this option, AWS DMS captures changes while migrating your existing data. AWS DMS continues to capture and apply changes even after the bulk data has been loaded. Eventually the source and target databases will be in sync, allowing for a minimal-downtime migration. To cut over, take the following steps: shut the application down, let the final changes flow through to the target, perform any administrative tasks such as enabling foreign keys and triggers, and then start the application pointing to the new target database. AWS DMS loads the bulk data table by table, <n> tables at a time. As the full load progresses, AWS DMS begins applying cached changes to the target tables as soon as possible. During the bulk load, referential integrity is violated; therefore, existing foreign keys must be disabled for the full load. Once the full load is complete, your target database has integrity and changes are applied as transactions.

Replicate data changes only: In some cases you might choose to load bulk data using a different method. This approach generally applies only to homogeneous migrations.

Start task on create: In most situations, having the task start immediately is fine. Sometimes you might want to delay the start of a task, for instance, to change logging levels.
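If you do delay the start, you can start the task later from the console or, as a sketch, with the AWS CLI; the task ARN shown is a placeholder for whatever AWS DMS reports for the task you create below:

# Start a previously created task
aws dms start-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
    --start-replication-task-type start-replication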
Next, set the Advanced settings as follows.

Target table preparation mode: AWS DMS allows you to specify how you would like your target tables prepared prior to loading.

Do nothing - When you select this option, AWS DMS does nothing to prepare your tables. Your table structure remains as is and any existing data is left in the table. You can use this method to consolidate data from multiple systems.

Drop tables on target - Typically you use this option when you want AWS DMS to create your target tables for you. When you select this option, AWS DMS drops and recreates the target tables before migration.

Truncate - Select this option if you want to pre-create some or all of the tables on your target system, perhaps with the AWS Schema Conversion Tool. When you select this option, AWS DMS truncates a target table prior to loading it. If the target table doesn't exist, AWS DMS creates the table for you.

Include LOB columns in replication: Large objects (LOBs) can sometimes be difficult to migrate between systems. AWS DMS offers a number of options to help with the tuning of LOB columns. To see which data types AWS DMS considers LOBs and when, see the AWS DMS documentation.

Don't include LOB columns - When you migrate data from one database to another, you might take the opportunity to rethink how your LOBs are stored, especially for heterogeneous migrations. If you want to do so, there's no need to migrate the LOB data.

Full LOB mode - In full LOB mode, AWS DMS migrates all LOBs from source to target regardless of size. In this configuration, AWS DMS has no information about the maximum size of LOBs to expect. Thus, LOBs are migrated one at a time, piece by piece. Full LOB mode can be quite slow.

Limited LOB mode - In limited LOB mode, you set a maximum size LOB that AWS DMS should accept. Doing so allows AWS DMS to pre-allocate memory and load the LOB data in bulk. LOBs that exceed the maximum LOB size are truncated, and a warning is issued to the log file. In limited LOB mode you get significant performance gains over full LOB mode. We recommend that you use limited LOB mode whenever possible. With Oracle, LOBs are treated as VARCHAR data types whenever possible. This approach means AWS DMS fetches them from the database in bulk, which is significantly faster than other methods. The maximum size of a VARCHAR in Oracle is 64K; therefore, a limited LOB size of less than 64K is optimal when Oracle is your source database.

Max LOB size (K): When a task is configured to run in limited LOB mode, this option determines the maximum size LOB that AWS DMS accepts. Any LOBs that are larger than this value are truncated to this value.

LOB chunk size (K): When a task is configured to use full LOB mode, AWS DMS retrieves LOBs in pieces. This option determines the size of each piece. When setting this option, pay particular attention to the maximum packet size allowed by your network configuration. If the LOB chunk size exceeds your maximum allowed packet size, you might see disconnect errors.

Custom CDC start time: This parameter pertains to tasks configured to replicate data changes only. It tells AWS DMS where to start looking for changes in the change stream.

Enable logging: Always enable logging.

Set the additional parameters.

Create control table(s) in target schema: AWS DMS requires some control tables in the target database. By default those tables are created in the same database as your data. This parameter allows you to tell AWS DMS to put those artifacts somewhere else.

Maximum number of tables to load in parallel: AWS DMS performs a table-by-table load of your data. This parameter allows you to control how many tables AWS DMS loads in parallel. The default is 8, which is optimal in most situations.
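These console options map onto the task settings JSON that AWS DMS accepts when you create a task programmatically. The following fragment is a sketch of what the limited LOB mode, table preparation, and parallel-load choices above might look like in that JSON; treat the exact values as illustrative rather than recommended:

{
  "TargetMetadata": {
    "SupportLobs": true,
    "FullLobMode": false,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32,
    "LobChunkSize": 64
  },
  "FullLoadSettings": {
    "TargetTablePrepMode": "DROP_AND_CREATE",
    "MaxFullLoadSubTasks": 8
  }
}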
Specify any table mapping settings. Table mappings tell AWS DMS which tables a task should migrate from source to target. Table mappings are expressed in JSON, though some settings can be made using the AWS Management Console. Table mappings can also include transformations, such as changing table names from uppercase to lowercase.

AWS DMS generates default table mappings for each (non-system) schema in the source database. In most cases you'll want to customize your table mapping. To customize your table mapping, select the custom radio button. For details on creating table mappings, see the AWS DMS documentation.

The following table mapping does these things: it includes the DMS_SAMPLE schema in the migration; it excludes the tables MLB_DATA, NAME_DATA, NFL_DATA, and NFL_STADIUM_DATA; and it converts the schema, table, and column names to lowercase.

{
  "rules": [
    { "rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": { "schema-name": "DMS_SAMPLE", "table-name": "%" }, "rule-action": "include" },
    { "rule-type": "selection", "rule-id": "2", "rule-name": "2", "object-locator": { "schema-name": "DMS_SAMPLE", "table-name": "MLB_DATA" }, "rule-action": "exclude" },
    { "rule-type": "selection", "rule-id": "3", "rule-name": "3", "object-locator": { "schema-name": "DMS_SAMPLE", "table-name": "NAME_DATA" }, "rule-action": "exclude" },
    { "rule-type": "selection", "rule-id": "4", "rule-name": "4", "object-locator": { "schema-name": "DMS_SAMPLE", "table-name": "NFL_DATA" }, "rule-action": "exclude" },
    { "rule-type": "selection", "rule-id": "5", "rule-name": "5", "object-locator": { "schema-name": "DMS_SAMPLE", "table-name": "NFL_STADIUM_DATA" }, "rule-action": "exclude" },
    { "rule-type": "transformation", "rule-id": "6", "rule-name": "6", "rule-action": "convert-lowercase", "rule-target": "schema", "object-locator": { "schema-name": "%" } },
    { "rule-type": "transformation", "rule-id": "7", "rule-name": "7", "rule-action": "convert-lowercase", "rule-target": "table", "object-locator": { "schema-name": "%", "table-name": "%" } },
    { "rule-type": "transformation", "rule-id": "8", "rule-name": "8", "rule-action": "convert-lowercase", "rule-target": "column", "object-locator": { "schema-name": "%", "table-name": "%", "column-name": "%" } }
  ]
}
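If you are creating the task from the AWS CLI rather than the console, the same mapping can be supplied from a file. The following is a sketch only; the ARNs are placeholders for the endpoints and replication instance created earlier, table-mappings.json contains the JSON above, and full-load matches this walkthrough's migrate-existing-data-only approach:

# Create the migration task using the table mappings saved locally
aws dms create-replication-task \
    --replication-task-identifier oracle2aurora-task \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCEEXAMPLE \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGETEXAMPLE \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE \
    --migration-type full-load \
    --table-mappings file://table-mappings.json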
Step 7: Monitor Your Migration Task

Three sections in the console provide visibility into what your migration task is doing:

  • Task monitoring – The Task Monitoring tab provides insight into your full load throughput and also your change capture and apply latencies.
  • Table statistics – The Table Statistics tab provides detailed information on the number of rows processed, the type and number of transactions processed, and also information on DDL operations.
  • Logs – From the Logs tab you can view your task's log file (assuming you turned logging on). If for some reason your task fails, search this file for errors. Additionally, you can look in the file for any warnings. Any data truncation in your task appears as a warning in the log file. If you need to, you can increase the logging level by using the AWS Command Line Interface (CLI).

Troubleshooting

The two most common areas where people have issues when working with Oracle as a source and Aurora as a target are supplemental logging and case sensitivity.

Supplemental logging – With Oracle, in order to replicate change data, supplemental logging must be enabled. However, even if you enable supplemental logging at the database level, you sometimes still need to enable it on newly created tables. The best remedy for this is to allow AWS DMS to enable supplemental logging for you by using the extra connection attribute addSupplementalLogging=Y.

Case sensitivity – Oracle is case-insensitive (unless you use quotes around your object names). However, object names are stored and displayed in uppercase. Thus, AWS DMS defaults to naming your target objects in uppercase. In most cases, you'll want to use transformations to change schema, table, and column names to lowercase.

For more tips, see the AWS DMS troubleshooting section in the AWS DMS User Guide. To troubleshoot issues specific to Oracle, see the Oracle troubleshooting section: http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Oracle To troubleshoot Aurora and MySQL issues, see the MySQL troubleshooting section: http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.MySQL

Working with the Sample Database for Migration

We recommend working through the preceding outline and guide by using the sample Oracle database provided by Amazon. This database mimics a simple sporting event ticketing system. The scripts to generate the sample database are part of the .tar file located here: https://github.com/awslabs/aws-database-migration-samples.

To build the sample database, extract the .tar file and follow the instructions in the README and install files. The sample includes approximately 8-10 GB of data.

The sample database also includes the ticketManagement package, which you can use to generate some transactions. To generate transactions, log in to SQL*Plus or SQL Developer and run the following as dms_sample:

SQL> exec ticketManagement.generateTicketActivity(0.01,1000);

The first parameter is the transaction delay in seconds; the second is the number of transactions to generate. The procedure simply "sells tickets" to people. You'll see updates to the tables sporting_event_ticket and ticket_purchase_history.

Once you've "sold" some tickets, you can transfer them using the following command:

SQL> exec ticketManagement.generateTransferActivity(1,100);

The first parameter is the transaction delay in seconds; the second is the number of transactions to generate. This procedure also updates sporting_event_ticket and ticket_purchase_history.
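Once your migration task has run, a simple optional spot check is to compare row counts between the source and target for the tables the sample procedures touch. The table names below come from the sample schema; on the Aurora side they are lowercase if you applied the transformation rules shown earlier:

-- On the Oracle source, connected as dms_sample
SELECT COUNT(*) FROM sporting_event_ticket;

-- On the Aurora target
SELECT COUNT(*) FROM dms_sample.sporting_event_ticket;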