Migrate Db2 for LUW to Amazon EC2 by using log shipping to reduce outage time

Created by Feng Cai (AWS), Ambarish Satarkar (AWS), and Saurabh Sharma (AWS)

Environment: Production

Source: On-premises Db2 for Linux

Target: Db2 on Amazon EC2

R type: Rehost

Workload: IBM

Technologies: Migration; Databases

AWS services: AWS Direct Connect; Amazon EBS; Amazon EC2; Amazon S3; AWS Site-to-Site VPN

Summary

When customers migrate their IBM Db2 for LUW (Linux, UNIX, and Windows) workloads to Amazon Web Services (AWS), using Amazon Elastic Compute Cloud (Amazon EC2) with the Bring Your Own License (BYOL) model is the fastest path. However, migrating large amounts of data from on-premises Db2 to AWS can be a challenge, especially when the outage window is short. Many customers try to keep the outage window under 30 minutes, which leaves little time for the database migration itself.

This pattern covers how to accomplish a Db2 migration with a short outage window by using transaction log shipping. This approach applies to Db2 on a little-endian Linux platform.

Prerequisites and limitations

Prerequisites

  • An active AWS account

  • A Db2 instance running on an EC2 instance, with file system layouts that match the on-premises server

  • An Amazon Simple Storage Service (Amazon S3) bucket accessible to the EC2 instance

  • An AWS Identity and Access Management (IAM) policy and role to make programmatic calls to Amazon S3

  • Synchronized time zone and system clocks on Amazon EC2 and the on-premises server

  • The on-premises network connected to AWS through AWS Site-to-Site VPN or AWS Direct Connect

Limitations

  • The Db2 on-premises instance and Amazon EC2 must be on the same platform family.

  • The Db2 on-premises workload must be logged. To block any unlogged transactions, set blocknonlogged=yes in the database configuration (see the example command after this list).
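A minimal example, assuming the database name SAMPLE that is used later in this pattern:

db2 connect to sample
db2 update db cfg for SAMPLE using blocknonlogged yes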

Product versions

  • Db2 for LUW version 11.5.9 and later

Architecture

Source technology stack

  • Db2 on Linux x86_64

Target technology stack

  • Amazon EBS

  • Amazon EC2

  • AWS Identity and Access Management (IAM)

  • Amazon S3

  • AWS Site-to-Site VPN or Direct Connect

Target architecture

The following diagram shows one Db2 instance running on-premises with a virtual private network (VPN) connection to Db2 on Amazon EC2. The dotted lines represent the VPN tunnel between your data center and the AWS Cloud.

Workflow for accomplishing a Db2 migration within a short outage window by using transaction log shipping.

Tools

AWS services

Other tools

  • db2cli is the Db2 interactive CLI command.

Best practices

Epics

Task | Description | Skills required

Set environment variables.

This pattern uses the following names:

  • Instance name: db2inst1

  • Database name: SAMPLE

You can change them to fit your environment.
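If you prefer to reference these names in shell commands, a minimal sketch follows (DB2INSTANCE is the standard Db2 environment variable; DBNAME is only an illustrative shell variable used here for readability):

export DB2INSTANCE=db2inst1   # Db2 instance name used in this pattern
export DBNAME=SAMPLE          # database name used in this pattern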

DBA
Task | Description | Skills required

Set up the AWS CLI.

To download and install the latest version of the AWS CLI, run the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install
Linux administrator

Set up a local destination for Db2 archive logs.

To keep the target database on Amazon EC2 in sync with the on-premises source database, the latest transaction logs need to be retrieved from the source.

In this setup, /db2logs is set by LOGARCHMETH2 on the source as a staging area. The archived logs in this directory will be synced to Amazon S3 and accessed by Db2 on Amazon EC2. The pattern uses LOGARCHMETH2 because LOGARCHMETH1 might already be configured to use a third-party vendor tool that the AWS CLI cannot access. To set the archive log destination, run the following commands:

db2 connect to sample
db2 update db cfg for SAMPLE using LOGARCHMETH2 disk:/db2logs
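To verify the setting, you can check the database configuration (a quick check, assuming the database name SAMPLE):

db2 get db cfg for SAMPLE | grep -i LOGARCHMETH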
DBA

Run an online database backup.

Run an online database backup, and save it to the local backup file system:

db2 backup db sample online to /backup
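Optionally, verify the integrity of the backup image with the db2ckbkp utility before sending it to Amazon S3 (the file name below is only an example; the actual name contains the database name, instance name, and backup timestamp):

db2ckbkp /backup/SAMPLE.0.db2inst1.DBPART000.20230628164042.001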
DBA
Task | Description | Skills required

Create an S3 bucket.

Create an S3 bucket in AWS for the on-premises server to send the Db2 backup images and log files to. The bucket will also be accessed by Amazon EC2:

aws s3api create-bucket --bucket logshipmig-db2 --region us-east-1
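If you create the bucket in a Region other than us-east-1, the call also requires a location constraint. For example, for us-west-2:

aws s3api create-bucket --bucket logshipmig-db2 --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2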
AWS systems administrator

Create an IAM policy.

The db2bucket.json file contains the IAM policy to access the Amazon S3 bucket:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:GenerateDataKey", "kms:Decrypt", "s3:PutObject", "s3:GetObject", "s3:AbortMultipartUpload", "s3:ListBucket", "s3:DeleteObject", "s3:GetObjectVersion", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::logshipmig-db2/*", "arn:aws:s3:::logshipmig-db2" ] } ] }

To create the policy, use the following AWS CLI command:

aws iam create-policy \
    --policy-name db2s3policy \
    --policy-document file://db2bucket.json

The JSON output shows the Amazon Resource Name (ARN) for the policy, where aws_account_id represents your account ID:

"Arn": "arn:aws:iam::aws_account_id:policy/db2s3policy"
AWS administrator, AWS systems administrator

Attach the IAM policy to the IAM role used by the EC2 instance.

In most AWS environments, a running EC2 instance has an IAM role set by your systems administrator. If the IAM role is not set, create the role, and then choose Modify IAM role on the Amazon EC2 console to associate the role with the EC2 instance that hosts the Db2 database.
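If you need to create the role, the following is a minimal sketch that assumes the standard Amazon EC2 trust relationship (the ec2-trust.json file name is illustrative):

# Trust policy that allows EC2 instances to assume the role
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name db2s3role \
    --assume-role-policy-document file://ec2-trust.json

After the role exists, attach the IAM policy to it by using the policy ARN: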

aws iam attach-role-policy \
    --policy-arn "arn:aws:iam::aws_account_id:policy/db2s3policy" \
    --role-name db2s3role

After the policy is attached, any EC2 instance associated with the IAM role can access the S3 bucket.

AWS administrator, AWS systems administrator
Task | Description | Skills required

Configure the AWS CLI on the on-premises Db2 server.

Configure the AWS CLI with the Access Key ID and Secret Access Key generated in the earlier step:

$ aws configure
AWS Access Key ID [None]: *************
AWS Secret Access Key [None]: ***************************
Default region name [None]: us-east-1
Default output format [None]: json

AWS administrator, AWS systems administrator

Send the backup image to Amazon S3.

Earlier, an online database backup was saved to the /backup local directory. To send that backup image to the S3 bucket, run the following command:

aws s3 sync /backup s3://logshipmig-db2/SAMPLE_backup
AWS administrator, Migration engineer

Send the Db2 archive logs to Amazon S3.

Sync the on-premises Db2 archive logs with the S3 bucket that can be accessed by the target Db2 instance on Amazon EC2:

aws s3 sync /db2logs s3://logshipmig-db2/SAMPLE_LOG

Run this command periodically by using cron or other scheduling tools. The frequency depends on how often the source database archives transaction log files.
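For example, a crontab entry that syncs the logs every 15 minutes might look like the following (the interval and output log path are assumptions; adjust them to your environment):

*/15 * * * * aws s3 sync /db2logs s3://logshipmig-db2/SAMPLE_LOG >> /home/db2inst1/s3sync.log 2>&1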

AWS administrator, Migration engineer
Task | Description | Skills required

Create a PKCS12 keystore.

Db2 uses a Public-Key Cryptography Standards (PKCS) encryption keystore to keep the AWS access key secure. Create a keystore and configure the source Db2 instance to use it:

gsk8capicmd_64 -keydb -create -db "/home/db2inst1/.keystore/db2s3.p12" -pw "<password>" -type pkcs12 -stash
db2 "update dbm cfg using keystore_location /home/db2inst1/.keystore/db2s3.p12 keystore_type pkcs12"
DBA

Create the Db2 storage access alias.

To create the storage access alias, use the following script syntax:

db2 "catalog storage access alias <alias_name> vendor S3 server <S3 endpoint> container '<bucket_name>'"

For example, your script might look like the following:

db2 "catalog storage access alias DB2AWSS3 vendor S3 server s3.us-east-1.amazonaws.com container 'logshipmig-db2'"

DBA

Set the staging area.

By default, Db2 uses a local staging area to upload and download files to and from Amazon S3. The default path is sqllib/tmp/RemoteStorage.xxxx under the instance home directory, where xxxx is the Db2 partition number. Note that the staging area must have enough capacity to hold the backup images and log files. You can use the DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH registry variable to point the staging area to a different directory.

We also recommend setting DB2_ENABLE_COS_SDK=ON and DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore, and linking the awssdk library, so that database backup and restore operations bypass the Amazon S3 staging area:

# By root:
cp -rp /home/db2inst1/sqllib/lib64/awssdk/RHEL/7.6/* /home/db2inst1/sqllib/lib64/

# By the Db2 instance owner:
db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/db2stage
db2set DB2_ENABLE_COS_SDK=ON
db2set DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore
db2stop
db2start
DBA

Restore the database from the backup image.

Restore the target database on Amazon EC2 from the backup image in the S3 bucket:

db2 restore db sample from DB2REMOTE://DB2AWSS3/logshipmig-db2/SAMPLE_backup replace existing
DBA

Roll forward the database.

After the restore is complete, the target database will be put into rollforward pending state. Configure LOGARCHMETH1 and LOGARCHMETH2 so that Db2 knows where to get the transaction log files:

db2 update db cfg for SAMPLE using LOGARCHMETH1 'DB2REMOTE://DB2AWSS3//SAMPLE_LOG/'
db2 update db cfg for SAMPLE using LOGARCHMETH2 OFF

Start database rollforward:

db2 ROLLFORWARD DATABASE sample to END OF LOGS

This command processes all log files that have been transferred to the S3 bucket. Run it periodically, based on the frequency of the s3 sync command on the on-premises Db2 server. For example, if s3 sync runs every hour and takes 10 minutes to sync all the log files, set this command to run 10 minutes after each hour.
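Between runs, you can check how far the target database has rolled forward (assuming the database name SAMPLE):

db2 rollforward db sample query status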

DBA
Task | Description | Skills required

Bring the target database online.

During the cutover window, do one of the following:

  • Put the on-premises database in ADMIN MODE, force the last transaction log to be archived, and run the s3 sync command to send it to Amazon S3 (see the sketch after this list).

  • Shut down the database.
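A possible sequence for the first option is shown below. This is only a sketch; QUIESCE is one way to put the database into administrative mode, and the exact steps depend on your change-control process:

db2 connect to sample
db2 quiesce database immediate force connections      # allow admin-only access on the source
db2 connect reset
db2 archive log for database sample                   # force the current log to be archived
aws s3 sync /db2logs s3://logshipmig-db2/SAMPLE_LOG   # ship the final logs to Amazon S3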

After the last transaction log is synced into Amazon S3, run the ROLLFORWARD command for the final time:

db2 rollforward DB sample to END OF LOGS
db2 rollforward DB sample complete

   Rollforward Status
   ....
   Rollforward status   = not pending
   ....
DB20000I  The ROLLFORWARD command completed successfully.

db2 activate db sample
DB20000I  The ACTIVATE DATABASE command completed successfully.

Bring the target database online, and point the application connections to Db2 on Amazon EC2.

DBA

Troubleshooting

Issue | Solution

If multiple databases have the same instance name and database name on different hosts (DEV, QA, PROD), backups and logs might go to the same subdirectory.

Use different S3 buckets for DEV, QA, and PROD, and add the host name as a subdirectory prefix to avoid confusion.

If there are multiple backup images in the same location, you will get the following error when you restore:

SQL2522N More than one backup file matches the time stamp value provided for the backed up database image.

In the restore command, add the timestamp of the backup:

db2 restore db sample from DB2REMOTE://DB2AWSS3/logshipmig-db2/SAMPLE_backup taken at 20230628164042 replace existing

Related resources