Oracle Data Pump and MySQL mysqldump and mysql
With AWS DMS, you can migrate data between different database platforms, including Oracle and MySQL, using tools like Oracle Data Pump and MySQL mysqldump and mysql. Oracle Data Pump provides a streamlined way to move data and metadata from one Oracle database to another, while mysqldump and mysql are utilities for backing up and restoring MySQL databases.
| Feature compatibility | AWS SCT / AWS DMS automation level | AWS SCT action code index | Key differences |
| --- | --- | --- | --- |
|  | N/A | N/A | Non-compatible tool. |
Oracle usage
Oracle Data Pump is a utility for exporting and importing data from/to an Oracle database. It can be used to copy an entire database, entire schemas, or specific objects in a schema. Oracle Data Pump is commonly used as part of a backup strategy for restoring individual database objects (specific records, tables, views, stored procedures, and so on), as opposed to snapshots or Oracle RMAN, which provide backup and recovery capabilities at the database level. By default, the dump file generated by Oracle Data Pump is binary and can't be opened using a text editor, although the `SQLFILE` parameter of `IMPDP` can extract its DDL into a readable script.
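As a minimal sketch, assuming the dump file and directory names used in the examples below, extracting that DDL looks like this:

```
# Sketch: write the DDL contained in hr.dmp to hr_ddl.sql without importing anything.
# expdp_dir, hr.dmp, and hr_ddl.sql are placeholder names.
$ impdp system/**** directory=expdp_dir dumpfile=hr.dmp sqlfile=hr_ddl.sql
```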
Oracle Data Pump supports:

- Export data from an Oracle database: the Data Pump `EXPDP` command creates a binary dump file containing the exported database objects. Objects can be exported with data or with metadata only. Exports can be performed as of a specific timestamp or Oracle SCN to ensure cross-object consistency (see the sketch after this list).
- Import data to an Oracle database: the Data Pump `IMPDP` command imports objects and data from a specific dump file created with the `EXPDP` command. The `IMPDP` command can filter on import (for example, import only certain objects) and remap object and schema names during import.
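As a minimal sketch of a consistent export, assuming an SCN captured beforehand (the value below is a placeholder), the `FLASHBACK_SCN` parameter makes `EXPDP` export every object as of that SCN:

```
# Sketch: export the HR schema as of SCN 1234567 for cross-object consistency.
# flashback_time=systimestamp is the timestamp-based equivalent.
$ expdp system/**** directory=expdp_dir schemas=hr dumpfile=hr_consistent.dmp \
    logfile=hr_consistent.log flashback_scn=1234567
```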
The term logical backup refers to a dump file created by Oracle Data Pump. `EXPDP` and `IMPDP` can only read or write dump files from file system paths that were pre-configured in the Oracle database as directories. During export or import, users must specify the logical directory name where the dump file should be created, not the actual file system path.
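Creating such a directory requires administrative privileges. As a minimal sketch, with an assumed file system path and grantee, a directory object could be set up from the shell like this:

```
$ sqlplus / as sysdba <<EOF
-- Map the logical name expdp_dir to an assumed file system path.
CREATE OR REPLACE DIRECTORY expdp_dir AS '/u01/app/oracle/export';
-- Non-administrative users also need read/write access to the directory object.
GRANT READ, WRITE ON DIRECTORY expdp_dir TO hr;
EOF
```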
Examples
Use `EXPDP` to export the `HR` schema.
$ expdp system/**** directory=expdp_dir schemas=hr dumpfile=hr.dmp logfile=hr.log
The command contains the credentials to run Data Pump, the logical Oracle directory name for the dump file location (which maps in the database to a physical file system location), the schema name to export, the dump file name, and the log file name.
Use `IMPDP` to import the `HR` schema and rename it to `HR_COPY`.
$ impdp system/**** directory=expdp_dir schemas=hr dumpfile=hr.dmp logfile=hr.log REMAP_SCHEMA=hr:hr_copy
The command contains the database credentials to run Data Pump, the logical Oracle directory where the export dump file is located, the dump file name, the schema to import, the log file name, and the `REMAP_SCHEMA` parameter.
For more information, see Oracle Data Pump.
MySQL usage
MySQL provides two native utilities, mysqldump and mysqlimport, that you can use to perform logical database exports and imports. Their functionality is comparable to Oracle Data Pump; however, in some use cases the mysql client utility is the closer equivalent of the Oracle Data Pump import tool `impdp`. You can use these utilities to move data between two different databases or to create logical database backups.
To explain the difference between the mysql and mysqlimport utilities, it helps to map each one to its Oracle equivalent.
Use the mysql utility to interact with the database, much like SQL*Plus. For import purposes, you can run it with a script containing all of the `CREATE` and `INSERT` commands to rebuild your schema and insert the data, just as you would in Oracle.
The mysqlimport utility reads a comma-separated value (CSV) data file and is equivalent to SQL*Loader; it is a command-line interface to the `LOAD DATA` statement. It is mostly used to move a schema or some of its objects between clusters. Use this utility when you have a data file (not a script) and want to load it quickly, as shown in the sketch after the following list.
- mysqldump is equivalent to Oracle `expdp`.
- mysql is equivalent to Oracle `impdp`.
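As a minimal sketch of the mysqlimport path, with placeholder host, database, and file names: mysqlimport derives the target table name from the data file name, so the file below would load into a table named employees.

```
# Sketch: load the client-side CSV file /local_path/employees.csv into DB_NAME.employees.
$ mysqlimport --local --fields-terminated-by=',' \
    -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p \
    DB_NAME /local_path/employees.csv
```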
Amazon Aurora MySQL supports data export and import using mysqldump, mysqlimport, or mysql creation scripts. The binaries for all utilities must be installed on your local workstation or on an Amazon EC2 server.
After export, MySQL dump files created using mysqldump can be copied to an Amazon S3 bucket. Later, when the dump files are needed for a database restore, they can be copied back to a desktop or server with a MySQL client (such as your workstation or an Amazon EC2 server) and replayed using the mysql utility.
- mysqldump creates consistent backups only if you use the `--single-transaction` option (see the sketch after this list).
- mysqldump does not block other readers or writers accessing the database.
- Unlike Data Pump dump files, mysqldump files are plain text.
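The following is a minimal sketch of a consistent export, reusing the placeholder endpoint, database, and user names from the examples below:

```
# Sketch: take a consistent dump of an InnoDB database without blocking writers.
# --single-transaction reads everything inside one REPEATABLE READ transaction.
$ mysqldump --single-transaction --column-statistics=0 DATABASE_NAME \
    -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p > /local_path/backup-file.sql
```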
Examples
Export data using mysqldump.
mysqldump --column-statistics=0 DATABASE_TO_RESTORE -h INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p > /local_path/backup-file.sql
Note
In Amazon Relational Database Service (Amazon RDS) for MySQL version 8.0, make sure that the `column_statistics` flag is set to 0 if you use MySQL 8.0 binaries when running mysqldump.
Run an export and copy the backup file to an Amazon S3 bucket using a pipe and the AWS CLI.
mysqldump --column-statistics=0 DATABASE_NAME -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p | aws s3 cp - s3://mysql-backups/mysql_bck-$(date "+%Y-%m-%d-%H-%M-%S")
Import data using mysql.
mysql DB_NAME -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p < /local_path/backup-file.sql
Copy the output file from the local server to an Amazon S3 bucket using the AWS CLI.
aws s3 cp /local_path/backup-file.sql s3://my-bucket/backup-$(date "+%Y-%m-%d-%H-%M-%S")
In the preceding example, the `$(date "+%Y-%m-%d-%H-%M-%S")` expression is valid on Linux servers only.
Download the output file from the S3 bucket.
$ aws s3 cp s3://my-bucket/backup-2017-09-10-01-10-10 /local_path/backup-file.sql
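To skip the local copy entirely, the dump can also be streamed straight from Amazon S3 into the database; this is a sketch that reuses the placeholder bucket, endpoint, and credentials from the preceding examples:

```
# Sketch: stream the backup from S3 directly into mysql without staging a local file.
$ aws s3 cp s3://my-bucket/backup-2017-09-10-01-10-10 - | \
    mysql DB_NAME -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p
```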
Summary
| Description | Oracle Data Pump | MySQL Dump |
| --- | --- | --- |
| Export data to a local file | `expdp system/*** schemas=hr dumpfile=hr.dmp logfile=hr.log` | `mysqldump --column-statistics=0 DATABASE_TO_RESTORE -h INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p > /local_path/backup-file.sql` |
| Export data to a remote file | Create an Oracle directory on a remote storage mount or NFS directory called `EXP_DIR`. Use the export command: `expdp system/*** schemas=hr directory=EXP_DIR dumpfile=hr.dmp logfile=hr.log` | `mysqldump --column-statistics=0 DATABASE_NAME -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p \| aws s3 cp - s3://mysql-backups/mysql_bck-$(date "+%Y-%m-%d-%H-%M-%S")` |
| Import data to a new database with a new name | `impdp system/*** schemas=hr dumpfile=hr.dmp logfile=hr.log REMAP_SCHEMA=hr:hr_copy TRANSFORM=OID:N` | `mysql DB_NAME -h MYSQL_INSTANCE_ENDPOINT -P 3306 -u USER_NAME -p < /local_path/backup-file.sql` |
For more information, see mysqldump — A Database Backup Program.