Restoring a backup into an Amazon RDS for MySQL DB instance
Amazon RDS supports importing MySQL databases by using backup files. You can create a backup of your database, store it on Amazon S3, and then restore the backup file onto a new Amazon RDS DB instance running MySQL.
The scenario described in this section restores a backup of an on-premises database. You can use this technique for databases in other locations, such as Amazon EC2 or non-AWS cloud services, as long as the database is accessible.
Importing backup files from Amazon S3 is supported for MySQL in all AWS Regions.
We recommend that you import your database to Amazon RDS by using backup files if your on-premises database can be offline while the backup file is created, copied, and restored. If your database can't be offline, you can use binary log (binlog) replication to update your database after you have migrated to Amazon RDS through Amazon S3 as explained in this topic. For more information, see Configuring binary log file position replication with an external source instance. You can also use the AWS Database Migration Service to migrate your database to Amazon RDS. For more information, see What is AWS Database Migration Service?
Overview of setting up to import backup files from Amazon S3 to Amazon RDS
These are the components you need to set up to import backup files from Amazon S3 to Amazon RDS:
- An Amazon S3 bucket to store your backup files.
- A backup of your on-premises database created by Percona XtraBackup.
- An AWS Identity and Access Management (IAM) role to allow Amazon RDS to access the bucket.
If you already have an Amazon S3 bucket, you can use that. If you don't have an Amazon S3 bucket, you can create a new one. If you want to create a new bucket, see Creating a bucket.
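If you prefer the AWS CLI, the following is a minimal sketch of creating a bucket. The bucket name is the placeholder used elsewhere in this topic, the Region value is an example, and the bucket should be in the same AWS Region as the DB instance that you plan to create.

# Hypothetical example: create the bucket in the same Region as the target DB instance.
aws s3 mb s3://amzn-s3-demo-bucket --region us-east-1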
Use the Percona XtraBackup tool to create your backup. For more information, see Creating your database backup.
If you already have an IAM role, you can use that. If you don't have an IAM role, you can create a new one manually. Alternatively, you can choose to have a new IAM role created for you in your account by the wizard when you restore the database by using the AWS Management Console. If you want to create a new IAM role manually, or attach trust and permissions policies to an existing IAM role, see Creating an IAM role manually. If you want to have a new IAM role created for you, follow the procedure in Console.
Creating your database backup
Use the Percona XtraBackup software to create your backup. We recommend that you use the latest version of Percona XtraBackup. You can install Percona XtraBackup from the Download Percona XtraBackup page.
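As an illustration only, the following sketch installs Percona XtraBackup 8.0 on a yum-based host by using the percona-release repository package. Adapt the package names to your platform and to the XtraBackup version that matches your source MySQL version.

# Assumes a yum-based distribution and the Percona software repositories.
sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
sudo percona-release enable-only tools release
sudo yum install -y percona-xtrabackup-80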
Warning
When creating a database backup, XtraBackup might save credentials in the xtrabackup_info file. Examine that file and make sure that the tool_command setting in it doesn't contain any sensitive information.
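For example, assuming the backup directory used in the examples later in this topic, you can inspect the setting directly:

# Review the tool_command line recorded by XtraBackup for anything sensitive.
grep tool_command /on-premises/s3-restore/backup/xtrabackup_info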
Note
For MySQL 8.4 migrations, you must use Percona XtraBackup 8.4.
For MySQL 8.0 migrations, you must use Percona XtraBackup 8.0. Percona XtraBackup 8.0.12 and higher versions support migration of all versions of MySQL 8.0. If you are migrating to RDS for MySQL 8.0.32 or higher, you must use Percona XtraBackup 8.0.12 or higher.
For MySQL 5.7 migrations, you can also use Percona XtraBackup 2.4. For migrations of earlier MySQL versions, you can also use Percona XtraBackup 2.3 or 2.4.
You can create a full backup of your MySQL database files using Percona XtraBackup. Alternatively, if you already use Percona XtraBackup to back up your MySQL database files, you can upload your existing full and incremental backup directories and files.
For more information about backing up your database with Percona XtraBackup, see the Percona XtraBackup documentation.
Creating a full backup with Percona XtraBackup
To create a full backup of your MySQL database files that can be restored from Amazon S3, use the Percona XtraBackup utility (xtrabackup) to back up your database.

For example, the following command creates a backup of a MySQL database and stores the files in the /on-premises/s3-restore/backup folder.
xtrabackup --backup --user=<myuser> --password=<password> --target-dir=</on-premises/s3-restore/backup>
If you want to compress your backup into a single file (which can be split later, if needed), you can save your backup in one of the following formats:
Gzip (.gz)
tar (.tar)
Percona xbstream (.xbstream)
Note
Percona XtraBackup 8.0 only supports Percona xbstream for compression.
The following command creates a backup of your MySQL database split into multiple Gzip files.
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
    --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.tar.gz
The following command creates a backup of your MySQL database split into multiple tar files.
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
    --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.tar
The following command creates a backup of your MySQL database split into multiple xbstream files.
xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
    --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.xbstream
Note
If you see the following error, it might be caused by mixing file formats in your command:
ERROR:/bin/tar: This does not look like a tar archive
Using incremental backups with Percona XtraBackup
If you already use Percona XtraBackup to perform full and incremental backups of your MySQL database files, you don't need to create a full backup and upload the backup files to Amazon S3. Instead, you can save a significant amount of time by copying your existing backup directories and files to your Amazon S3 bucket. For more information about creating incremental backups using Percona XtraBackup, see Incremental backup.
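As a rough sketch of the pattern that Percona documents (the directory names here are examples only), a base backup is followed by incremental backups that each reference the previous backup:

# Full (base) backup
xtrabackup --backup --user=<myuser> --password=<password> \
    --target-dir=/on-premises/s3-restore/backup/base

# Incremental backup that records only the changes since the base backup
xtrabackup --backup --user=<myuser> --password=<password> \
    --target-dir=/on-premises/s3-restore/backup/inc1 \
    --incremental-basedir=/on-premises/s3-restore/backup/base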
When copying your existing full and incremental backup files to an Amazon S3 bucket, you must recursively copy the contents of the base directory. Those contents include the full backup and also all incremental backup directories and files. This copy must preserve the directory structure in the Amazon S3 bucket. Amazon RDS iterates through all files and directories. Amazon RDS uses the xtrabackup-checkpoints file that is included with each incremental backup to identify the base directory, and to order incremental backups by log sequence number (LSN) range.
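One way to copy the backups while preserving the directory structure is aws s3 sync. The local path and bucket name below are the examples used in this topic; the backups/ prefix is an assumption.

# Recursively copy the base directory, including all incremental backup
# directories, to the S3 bucket without changing the directory layout.
aws s3 sync /on-premises/s3-restore/backup s3://amzn-s3-demo-bucket/backups/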
Backup considerations for Percona XtraBackup
Amazon RDS consumes your backup files based on the file name. Name your backup files with the appropriate file extension based on the file format, for example, .xbstream for files stored using the Percona xbstream format.

Amazon RDS consumes your backup files in alphabetical order and also in natural number order. Use the split option when you issue the xtrabackup command to ensure that your backup files are written and named in the proper order.
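For example, the split commands shown earlier write the pieces with two-digit numeric suffixes, so a listing of the backup folder (illustrative output only) sorts in the order that Amazon RDS expects:

ls /on-premises/s3-restore/backup
# backup.xbstream00  backup.xbstream01  backup.xbstream02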
Amazon RDS doesn't support partial backups created using Percona XtraBackup. You can't use the following options to create a partial backup when you back up the source files for your database: --tables, --tables-exclude, --tables-file, --databases, --databases-exclude, or --databases-file.

Amazon RDS supports incremental backups created using Percona XtraBackup. For more information about creating incremental backups using Percona XtraBackup, see Incremental backup.
Creating an IAM role manually
If you don't have an IAM role, you can create a new one manually. However, if you restore the database by using the AWS Management Console, we recommend that you follow the procedure in Console and choose to have RDS create this new IAM role for you.
To manually create a new IAM role for importing your database from Amazon S3, create a role to delegate permissions from Amazon RDS to your Amazon S3 bucket. When you create an IAM role, you attach trust and permissions policies. To import your backup files from Amazon S3, use trust and permissions policies similar to the following examples. For more information about creating the role, see Creating a role to delegate permissions to an AWS service.
The trust and permissions policies require that you provide an Amazon Resource Name (ARN). For more information about ARN formatting, see Amazon Resource Names (ARNs) and AWS service namespaces.
Example trust policy for importing from Amazon S3
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": {"Service": "rds.amazonaws.com"}, "Action": "sts:AssumeRole" }] }
Example permissions policy for importing from Amazon S3 — IAM user permissions
In the following example, replace iam_user_id with your own value.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3AccessRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::iam_user_id:role/S3Access"
    }
  ]
}
Example permissions policy for importing from Amazon S3 — role permissions
In the following example, replace amzn-s3-demo-bucket and prefix with your own values. If your bucket is encrypted, also include the last statement shown, which allows decryption with your AWS KMS key; in that statement, replace region, customer_id, and key_id with your own values.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:region:customer_id:key/key_id*"
      ]
    }
  ]
}
Note
If you include a file name prefix, include the asterisk (*) after the prefix. If you don't want to specify a prefix, specify only an asterisk.
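If you create the role from the AWS CLI instead of the console, a minimal sketch looks like the following. The role name S3Access matches the ARN in the example permissions policy, and the policy file names are assumptions for this sketch.

# Create the role with the trust policy shown above, then attach the
# permissions policy that grants access to the bucket.
aws iam create-role \
    --role-name S3Access \
    --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
    --role-name S3Access \
    --policy-name S3AccessPolicy \
    --policy-document file://permissions-policy.json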
Importing data from Amazon S3 to a new MySQL DB instance
You can import data from Amazon S3 to a new MySQL DB instance using the AWS Management Console, AWS CLI, or RDS API.
To import data from Amazon S3 to a new MySQL DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

2. In the top right corner of the Amazon RDS console, choose the AWS Region in which to create your DB instance. Choose the same AWS Region as the Amazon S3 bucket that contains your database backup.

3. In the navigation pane, choose Databases.

4. Choose Restore from S3.

   The Create database by restoring from S3 page appears.
5. Under S3 destination:

   a. Choose the S3 bucket that contains the backup.

   b. (Optional) For S3 prefix, enter the file path prefix for the files stored in your Amazon S3 bucket.

      If you don't specify a prefix, then RDS creates your DB instance using all of the files and folders in the root folder of the S3 bucket. If you do specify a prefix, then RDS creates your DB instance using the files and folders in the S3 bucket where the path for the file begins with the specified prefix.

      For example, suppose that you store your backup files on S3 in a subfolder named backups, and you have multiple sets of backup files, each in its own directory (gzip_backup1, gzip_backup2, and so on). In this case, you specify a prefix of backups/gzip_backup1 to restore from the files in the gzip_backup1 folder.
6. Under Engine options:

   a. For Engine type, choose MySQL.

   b. For Source engine version, choose the MySQL major version of your source database.

   c. For Engine Version, choose the default minor version of your MySQL major version in your AWS Region.

      In the AWS Management Console, only the default minor version is available. You can upgrade your DB instance after importing.
7. For IAM role, create or choose an IAM role with the required trust policy and permissions policy that allows Amazon RDS to access your Amazon S3 bucket. Perform one of the following actions:

   - (Recommended) Choose Create a new role, and enter the IAM role name. With this option, RDS automatically creates the role with the trust policy and permissions policy for you.

   - Choose an existing IAM role. Make sure that this role meets all of the criteria in Creating an IAM role manually.
8. Specify your DB instance information. For information about each setting, see Settings for DB instances.

   Note

   Be sure to allocate enough memory for your new DB instance so that the restore operation can succeed. You can also choose Enable storage autoscaling to allow for future growth automatically.

9. Choose additional settings as needed.

10. Choose Create database.
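The restore can take a long time for large backups. As an optional check, and purely as a sketch (the identifier below is the example used later in this topic), you can watch the instance status from the AWS CLI:

# Poll the instance status; it changes to "available" when the restore completes.
aws rds describe-db-instances \
    --db-instance-identifier myidentifier \
    --query 'DBInstances[0].DBInstanceStatus'

# Or block until the instance is available.
aws rds wait db-instance-available --db-instance-identifier myidentifier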
To import data from Amazon S3 to a new MySQL DB instance by using the AWS CLI, run the restore-db-instance-from-s3 command with the following options. For information about each setting, see Settings for DB instances.
Note
Be sure to allocate enough memory for your new DB instance so that the restore operation can succeed.
You can also use the --max-allocated-storage option to enable storage autoscaling and allow for future growth automatically.
--allocated-storage
--db-instance-identifier
--db-instance-class
--engine
--master-username
--manage-master-user-password
--s3-bucket-name
--s3-ingestion-role-arn
--s3-prefix
--source-engine
--source-engine-version
Example
For Linux, macOS, or Unix:
aws rds restore-db-instance-from-s3 \
    --allocated-storage 250 \
    --db-instance-identifier myidentifier \
    --db-instance-class db.m5.large \
    --engine mysql \
    --master-username admin \
    --manage-master-user-password \
    --s3-bucket-name amzn-s3-demo-bucket \
    --s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename \
    --s3-prefix bucketprefix \
    --source-engine mysql \
    --source-engine-version 8.0.32 \
    --max-allocated-storage 1000
For Windows:
aws rds restore-db-instance-from-s3 ^
    --allocated-storage 250 ^
    --db-instance-identifier myidentifier ^
    --db-instance-class db.m5.large ^
    --engine mysql ^
    --master-username admin ^
    --manage-master-user-password ^
    --s3-bucket-name amzn-s3-demo-bucket ^
    --s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename ^
    --s3-prefix bucketprefix ^
    --source-engine mysql ^
    --source-engine-version 8.0.32 ^
    --max-allocated-storage 1000
To import data from Amazon S3 to a new MySQL DB instance by using the Amazon RDS API, call the RestoreDBInstanceFromS3 operation.
Limitations and recommendations for importing backup files from Amazon S3 to Amazon RDS
The following are some limitations and recommendations for importing backup files from Amazon S3:
- You can only import your data to a new DB instance, not an existing DB instance.
- You must use Percona XtraBackup to create the backup of your on-premises database.
- You can't import data from a DB snapshot export to Amazon S3.
- You can't migrate from a source database that has tables defined outside of the default MySQL data directory.
- You must import your data to the default minor version of your MySQL major version in your AWS Region. For example, if your major version is MySQL 8.0, and the default minor version for your AWS Region is 8.0.35, then you must import your data into a MySQL version 8.0.35 DB instance. You can upgrade your DB instance after importing. For information about determining the default minor version, see MySQL on Amazon RDS versions.
- Backward migration isn't supported for either major versions or minor versions. For example, you can't migrate from version 8.0 to version 5.7, and you can't migrate from version 8.0.32 to version 8.0.31.
- You can't import a MySQL 5.5 or 5.6 database.
- You can't import an on-premises MySQL database from one major version to another. For example, you can't import a MySQL 5.7 database to an RDS for MySQL 8.0 database. You can upgrade your DB instance after you complete the import.
- You can't restore from an encrypted source database, but you can restore to an encrypted Amazon RDS DB instance.
- You can't restore from an encrypted backup in the Amazon S3 bucket.
- You can't restore from an Amazon S3 bucket in a different AWS Region than your Amazon RDS DB instance.
- Importing from Amazon S3 is not supported on the db.t2.micro DB instance class. However, you can restore to a different DB instance class and change the DB instance class later. For more information about instance classes, see Hardware specifications for DB instance classes.
- Amazon S3 limits the size of a file uploaded to an Amazon S3 bucket to 5 TB. If a backup file exceeds 5 TB, then you must split the backup file into smaller files.
- When you restore the database, the backup is copied and then extracted on your DB instance. Therefore, provision storage space for your DB instance that is equal to or greater than the sum of the backup size and the original database's size on disk.
- Amazon RDS limits the number of files uploaded to an Amazon S3 bucket to 1 million. If the backup data for your database, including all full and incremental backups, exceeds 1 million files, use a Gzip (.gz), tar (.tar.gz), or Percona xbstream (.xbstream) file to store full and incremental backup files in the Amazon S3 bucket. Percona XtraBackup 8.0 only supports Percona xbstream for compression.
- User accounts are not imported automatically. Save your user accounts from your source database and add them to your new DB instance later.
- Functions are not imported automatically. Save your functions from your source database and add them to your new DB instance later.
- Stored procedures are not imported automatically. Save your stored procedures from your source database and add them to your new DB instance later.
- Time zone information is not imported automatically. Record the time zone information for your source database, and set the time zone of your new DB instance later. For more information, see Local time zone for MySQL DB instances.
- The innodb_data_file_path parameter must be configured with only one data file that uses the default data file name "ibdata1:12M:autoextend". Databases with two data files, or with a data file with a different name, can't be migrated using this method. The following are examples of file names that are not allowed: "innodb_data_file_path=ibdata1:50M; ibdata2:50M:autoextend" and "innodb_data_file_path=ibdata01:50M:autoextend". You can check this setting on your source database before you create the backup, as shown after this list.
- The maximum size of the restored database is the maximum database size supported minus the size of the backup. So, if the maximum database size supported is 64 TiB, and the size of the backup is 30 TiB, then the maximum size of the restored database is 34 TiB, as in the following example:

  64 TiB - 30 TiB = 34 TiB

  For information about the maximum database size supported by Amazon RDS for MySQL, see General Purpose SSD storage and Provisioned IOPS SSD storage.
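To confirm the innodb_data_file_path requirement mentioned in the list above, you can query the source database before you create the backup. The connection options are examples only.

# The result must be the single default data file, ibdata1:12M:autoextend.
mysql --user=<myuser> --password -e "SHOW VARIABLES LIKE 'innodb_data_file_path';"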