

# Rehost
<a name="migration-rehost-pattern-list"></a>

**Topics**
+ [Accelerate the discovery and migration of Microsoft workloads to AWS](accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.md)
+ [Create an approval process for firewall requests during a rehost migration to AWS](create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws.md)
+ [Ingest and migrate EC2 Windows instances into an AWS Managed Services account](ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.md)
+ [Migrate a Couchbase Server database to Amazon EC2](migrate-couchbase-server-ec2.md)
+ [Migrate Db2 for LUW to Amazon EC2 by using log shipping to reduce outage time](migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time.md)
+ [Migrate Db2 for LUW to Amazon EC2 with high availability disaster recovery](migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery.md)
+ [Migrate IIS-hosted applications to Amazon EC2 by using appcmd.exe](migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.md)
+ [Migrate an on-premises Microsoft SQL Server database to Amazon EC2 using Application Migration Service](migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.md)
+ [Migrate an F5 BIG-IP workload to F5 BIG-IP VE on the AWS Cloud](migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud.md)
+ [Migrate an on-premises Go web application to AWS Elastic Beanstalk by using the binary method](migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method.md)
+ [Migrate an on-premises SFTP server to AWS using AWS Transfer for SFTP](migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp.md)
+ [Migrate an on-premises VM to Amazon EC2 by using AWS Application Migration Service](migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.md)
+ [Migrate small sets of data from on premises to Amazon S3 using AWS SFTP](migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp.md)
+ [Migrate an on-premises Oracle database to Oracle on Amazon EC2](migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2.md)
+ [Migrate an on-premises Oracle database to Amazon EC2 by using Oracle Data Pump](migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump.md)
+ [Migrate RHEL BYOL systems to AWS License-Included instances by using AWS MGN](migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.md)
+ [Migrate an on-premises Microsoft SQL Server database to Amazon EC2](migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2.md)
+ [Rehost on-premises workloads in the AWS Cloud: migration checklist](rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist.md)
+ [Set up Multi-AZ infrastructure for a SQL Server Always On FCI by using Amazon FSx](set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.md)
+ [Use BMC Discovery queries to extract migration data for migration planning](use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning.md)

# Accelerate the discovery and migration of Microsoft workloads to AWS
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws"></a>

*Ali Alzand, Amazon Web Services*

## Summary
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-summary"></a>

This pattern shows you how to use the [Migration Validator Toolkit PowerShell module](https://github.com/aws-samples/migration-validator-toolkit-for-microsoft-workloads) to discover and migrate your Microsoft workloads to AWS. The module works by performing multiple checks and validations for common tasks associated with any Microsoft workload. For example, the module checks for instances that might have multiple disks attached to them or instances that use many IP addresses. For a full list of checks that the module can perform, see the [Checks](https://github.com/aws-samples/migration-validator-toolkit-for-microsoft-workloads#checks) section on the module's GitHub page.

The Migration Validator Toolkit PowerShell module can help your organization reduce the time and effort involved in discovering which applications and services are running on your Microsoft workloads. The module can also help you identify the configurations of your workloads so that you can determine whether those configurations are supported on AWS. The module also provides recommendations for next steps and mitigation actions, so that you can avoid misconfigurations before, during, or after your migration.

## Prerequisites and limitations
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-prereqs"></a>

**Prerequisites**
+ Local administrator account
+ PowerShell 4.0

**Limitations**
+ Works only on Microsoft Windows Server 2012 R2 or later

## Tools
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-tools"></a>

**Tools**
+ PowerShell 4.0

**Code repository**

The Migration Validator Toolkit PowerShell module for this pattern is available in the GitHub [migration-validator-toolkit-for-microsoft-workloads](https://github.com/aws-samples/migration-validator-toolkit-for-microsoft-workloads) repository.

## Epics
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-epics"></a>

### Run the Migration Validator Toolkit PowerShell module on a single target
<a name="run-the-migration-validator-toolkit-powershell-module-on-a-single-target"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download, extract, import, and invoke the module. | Choose one of the following methods to download and deploy the module:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html)**Run the PowerShell script**In PowerShell, run the following example code:<pre>#MigrationValidatorToolkit<br />$uri = 'https://github.com/aws-samples/migration-validator-toolkit-for-microsoft-workloads/archive/refs/heads/main.zip'<br />$destination = (Get-Location).Path<br />if ((Test-Path -Path "$destination\MigrationValidatorToolkit.zip" -PathType Leaf) -or (Test-Path -Path "$destination\MigrationValidatorToolkit")) {<br />    Write-Host "File $destination\MigrationValidatorToolkit.zip or folder $destination\MigrationValidatorToolkit found, exiting"<br />} else {<br />    Write-Host "Enable TLS 1.2 for this PowerShell session only."<br />    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12<br />    $webClient = New-Object System.Net.WebClient<br />    Write-Host "Downloading MigrationValidatorToolkit.zip"<br />    $webClient.DownloadFile($uri, "$destination\MigrationValidatorToolkit.zip")<br />    Write-Host "MigrationValidatorToolkit.zip downloaded successfully"<br />    Add-Type -Assembly "system.io.compression.filesystem"<br />    [System.IO.Compression.ZipFile]::ExtractToDirectory("$destination\MigrationValidatorToolkit.zip","$destination\MigrationValidatorToolkit")<br />    Write-Host "Extracted MigrationValidatorToolkit.zip successfully"<br />    Import-Module "$destination\MigrationValidatorToolkit\migration-validator-toolkit-for-microsoft-workloads-main\MigrationValidatorToolkit.psm1"; Invoke-MigrationValidatorToolkit<br />}</pre>The code downloads the module from a .zip file. Then, the code extracts, imports, and invokes the module.**Download and extract the .zip file**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html)**Clone the GitHub repository**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html) | System Administrator | 
| Invoke the module manually. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html)[Format-Table](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/format-table?view=powershell-7.3) format:<pre>Import-Module .\MigrationValidatorToolkit.psm1;Invoke-MigrationValidatorToolkit</pre>[Format-List](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/format-list?view=powershell-7.3) format:<pre>Import-Module .\MigrationValidatorToolkit.psm1;Invoke-MigrationValidatorToolkit -List</pre>[Out-GridView](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/out-gridview?view=powershell-7.3) format:<pre>Import-Module .\MigrationValidatorToolkit.psm1;Invoke-MigrationValidatorToolkit -GridView</pre>[ConvertTo-Csv](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/convertto-csv?view=powershell-7.3) format:<pre>Import-Module .\MigrationValidatorToolkit.psm1;Invoke-MigrationValidatorToolkit -csv</pre> | System Administrator | 

### Run the Migration Validator Toolkit PowerShell module on multiple targets
<a name="run-the-migration-validator-toolkit-powershell-module-on-multiple-targets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the .zip file or clone the GitHub repository. | Choose one of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html)<pre>git clone https://github.com/aws-samples/migration-validator-toolkit-for-microsoft-workloads.git</pre> | System Administrator | 
| Update the server.csv list. | If you downloaded the .zip file, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html) | System Administrator | 
| Invoke the module. | You can run the module from any computer within the domain, using a domain user account that has administrator access to the target computers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html)<pre>Import-Module .\MigrationValidatorToolkit.psm1;Invoke-DomainComputers</pre>The output .csv file is saved in the `MigrationValidatorToolkit\Outputs` folder with the name prefix `DomainComputers_MigrationAutomations_YYYY-MM-DDTHH-MM-SS`. | System Administrator | 

## Troubleshooting
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `MigrationValidatorToolkit` writes information about executions, commands, and errors to log files on the running host. | You can view log files manually in the following location:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws.html) | 

## Related resources
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-resources"></a>
+ [Options, tools, and best practices for migrating Microsoft workloads to AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-microsoft-workloads-aws/introduction.html) (AWS Prescriptive Guidance)
+ [Microsoft migration patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migration-migration-patterns-by-workload-microsoft-pattern-list.html) (AWS Prescriptive Guidance)
+ [Free Cloud Migration Services on AWS](https://aws.amazon.com/free/migration/) (AWS documentation)
+ [Predefined post-launch actions](https://docs.aws.amazon.com/mgn/latest/ug/predefined-post-launch-actions.html) (Application Migration Service documentation)

## Additional information
<a name="accelerate-the-discovery-and-migration-of-microsoft-workloads-to-aws-additional"></a>

**Frequently asked questions**

*Where can I run the Migration Validator Toolkit PowerShell module?*

You can run the module on Microsoft Windows Server 2012 R2 or later.

*When do I run this module?*

We recommend that you run the module during the [assess phase](https://aws.amazon.com/cloud-migration/how-to-migrate/) of the migration journey.

*Does the module modify my existing servers?*

No. All actions in this module are read-only.

*How long does it take to run the module?*

The module typically takes 1–5 minutes to run, depending on the resource allocation of your server.

*What permissions does the module need to run?*

You must run the module from a local administrator account.

*Can I run the module on physical servers?*

Yes, as long as the operating system is Microsoft Windows Server 2012 R2 or later.

*How do I run the module at scale for multiple servers?*

To run the module on multiple domain-joined computers at scale, follow the steps in the *Run the Migration Validator Toolkit PowerShell module on multiple targets* epic of this guide. For computers that aren't domain joined, use a remote invocation or run the module locally by following the steps in the *Run the Migration Validator Toolkit PowerShell module on a single target* epic of this guide.
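
For computers that aren't domain joined, one remote-invocation approach is PowerShell Remoting. The following is a minimal sketch, not part of the toolkit itself; it assumes WinRM is enabled on the targets and that the module has already been copied to each computer, and the computer names and module path are placeholders.

```powershell
# Minimal sketch (assumptions: WinRM enabled, module already copied to each
# target, placeholder computer names and module path).
$targets = @('SERVER01', 'SERVER02')
foreach ($target in $targets) {
    try {
        # Run the toolkit in a remote session; output is written on the target.
        Invoke-Command -ComputerName $target -ScriptBlock {
            Import-Module 'C:\Temp\MigrationValidatorToolkit\MigrationValidatorToolkit.psm1'
            Invoke-MigrationValidatorToolkit -csv
        }
    } catch {
        Write-Host "Could not reach ${target}: $_"
    }
}
```

You can also pass `-Credential` to `Invoke-Command` if the session must run as a different local administrator account.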

# Create an approval process for firewall requests during a rehost migration to AWS
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws"></a>

*Srikanth Rangavajhala, Amazon Web Services*

## Summary
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws-summary"></a>

If you want to use [AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) or [Cloud Migration Factory on AWS](https://aws.amazon.com/solutions/implementations/cloud-migration-factory-on-aws/) for a rehost migration to the AWS Cloud, one of the prerequisites is that you must keep TCP ports 443 and 1500 open. Typically, opening these firewall ports requires approval from your information security (InfoSec) team.

This pattern outlines the process for obtaining firewall request approval from an InfoSec team during a rehost migration to the AWS Cloud. You can use this process to avoid rejections of your firewall request by the InfoSec team, which can be expensive and time-consuming. The firewall request process has two review and approval steps, in which AWS migration consultants and leads work with your InfoSec and application teams to open the firewall ports.

This pattern assumes that you are planning a rehost migration with AWS consultants or migration specialists from your organization. You can use this pattern if your organization doesn't have a firewall approval process or a firewall request blanket approval form. For more information, see the *Limitations* section of this pattern. For more information about network requirements for Application Migration Service, see [Network requirements](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html) in the Application Migration Service documentation.
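
Before circulating the request, the submitter can check whether TCP ports 443 and 1500 are already reachable from a source server. The following sketch uses the Windows `Test-NetConnection` cmdlet; the endpoint name is a placeholder for your replication server or service endpoint, not a value from this pattern.

```powershell
# Sketch: check outbound reachability of the two TCP ports required by
# Application Migration Service. The endpoint name below is a placeholder.
$ports = 443, 1500
if (Get-Command Test-NetConnection -ErrorAction SilentlyContinue) {
    foreach ($port in $ports) {
        $result = Test-NetConnection -ComputerName 'example-replication-target.example.com' `
            -Port $port -WarningAction SilentlyContinue
        Write-Host ("TCP {0} reachable: {1}" -f $port, $result.TcpTestSucceeded)
    }
} else {
    Write-Host 'Test-NetConnection is not available on this system.'
}
```

Capturing this output before and after the firewall change gives the InfoSec team concrete evidence that the request was scoped correctly.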

## Prerequisites and limitations
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws-prereqs"></a>

**Prerequisites**
+ A planned rehost migration with AWS consultants or migration specialists from your organization
+ The required port and IP information to migrate the stack
+ Existing and future state architecture diagrams
+ Firewall information about the on-premises and destination infrastructure, ports, and zone-to-zone traffic flow
+ A firewall request review checklist (attached)
+ A firewall request document, configured according to your organization’s requirements
+ A contact list for the firewall reviewers and approvers, including the following roles:
  + **Firewall request submitter** – AWS migration specialist or consultant. The firewall request submitter can also be a migration specialist from your organization.
  + **Firewall request reviewer** – Typically, this is the single point of contact (SPOC) from AWS.
  + **Firewall request approver** – An InfoSec team member.

**Limitations**
+ This pattern describes a generic firewall request approval process. Requirements can vary for individual organizations.
+ Make sure that you track changes to your firewall request document.

The following table shows the use cases for this pattern.


| Does your organization have an existing firewall approval process? | Does your organization have an existing firewall request form? | Suggested action | 
| --- | --- | --- | 
| Yes | Yes | Collaborate with AWS consultants or your migration specialists to implement your organization’s process. | 
| No | Yes | Use this pattern’s firewall approval process. Use either an AWS consultant or a migration specialist from your organization to submit the firewall request blanket approval form. | 
| No | No | Use this pattern’s firewall approval process. Use either an AWS consultant or a migration specialist from your organization to submit the firewall request blanket approval form. | 

## Architecture
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws-architecture"></a>

The following diagram shows the steps for the firewall request approval process.

![\[Process for firewall request approval from an InfoSec team during a rehost migration to AWS Cloud.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/cf9b58ad-ab6f-43d3-92da-968529c8d042/images/c672f7ce-6e9f-4dbc-bf2c-4272a6c4432b.png)


## Tools
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws-tools"></a>

You can use scanner tools such as [Palo Alto Networks](https://www.paloaltonetworks.com/) or [SolarWinds](https://www.solarwinds.com/) to analyze and validate firewalls and IP addresses.

## Epics
<a name="create-an-approval-process-for-firewall-requests-during-a-rehost-migration-to-aws-epics"></a>

### Analyze the firewall request
<a name="analyze-the-firewall-request"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Analyze the ports and IP addresses. | The firewall request submitter completes an initial analysis to understand the required firewall ports and IP addresses. After this is complete, they request that your InfoSec team open the required ports and map the IP addresses. | AWS Cloud engineer, migration specialist | 

### Validate the firewall request
<a name="validate-the-firewall-request"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the firewall information. | The AWS Cloud engineer schedules a meeting with your InfoSec team. During this meeting, the engineer examines and validates the firewall request information. Typically, the firewall request submitter is the same person as the firewall requester. This validation phase can be iterative, based on feedback from the approver. | AWS Cloud engineer, migration specialist | 
| Update the firewall request document. | After the InfoSec team shares their feedback, the firewall request document is edited, saved, and re-uploaded. This document is updated after each iteration. We recommend that you store this document in version-controlled storage so that all changes are tracked and correctly applied. | AWS Cloud engineer, migration specialist | 

### Submit the firewall request
<a name="submit-the-firewall-request"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Submit the firewall request. | After the firewall request approver has approved the firewall blanket approval request, the AWS Cloud engineer submits the firewall request. The request specifies the ports that must be opened and the IP addresses that are required to map and update the AWS account. You can make suggestions or provide feedback after the firewall request is submitted. We recommend that you automate this feedback process and send any edits through a defined workflow mechanism. | AWS Cloud engineer, migration specialist | 

## Attachments
<a name="attachments-cf9b58ad-ab6f-43d3-92da-968529c8d042"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/cf9b58ad-ab6f-43d3-92da-968529c8d042/attachments/attachment.zip)

# Ingest and migrate EC2 Windows instances into an AWS Managed Services account
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account"></a>

*Anil Kunapareddy and Venkatramana Chintha, Amazon Web Services*

## Summary
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-summary"></a>

This pattern explains the step-by-step process of migrating and ingesting Amazon Elastic Compute Cloud (Amazon EC2) Windows instances into an Amazon Web Services (AWS) Managed Services (AMS) account. AMS can help you manage the instance more efficiently and securely. AMS provides operational flexibility, enhances security and compliance, and helps you optimize capacity and reduce costs.

This pattern starts with an EC2 Windows instance that you have migrated to a staging subnet in your AMS account. A variety of migration services and tools are available to perform this task, such as AWS Application Migration Service.

To make a change to your AMS-managed environment, you create and submit a request for change (RFC) for a particular operation or action. Using an AMS workload ingest (WIGS) RFC, you ingest the instance into the AMS account and create a custom Amazon Machine Image (AMI). You then create the AMS-managed EC2 instance by submitting another RFC to create an EC2 stack. For more information, see [AMS Workload Ingest](https://docs.aws.amazon.com/managedservices/latest/appguide/ams-workload-ingest.html) in the AMS documentation.

## Prerequisites and limitations
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-prereqs"></a>

**Prerequisites**
+ An active, AMS-managed AWS account
+ An existing landing zone
+ Permissions to make changes in the AMS-managed VPC
+ An Amazon EC2 Windows instance in a staging subnet in your AMS account
+ Completion of the [general prerequisites](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-migrate-instance-prereqs.html) for migrating workloads using AMS WIGS
+ Completion of the [Windows prerequisites](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-migrate-prereqs-win.html) for migrating workloads using AMS WIGS

**Limitations**
+ This pattern applies only to EC2 instances that run Windows Server. It doesn't apply to instances that run other operating systems, such as Linux.

## Architecture
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-architecture"></a>

**Source technology stack**

Amazon EC2 Windows instance in a staging subnet in your AMS account

**Target technology stack**

Amazon EC2 Windows instance managed by AWS Managed Services (AMS)

**Target architecture**

![\[Process to migrate and ingest Amazon EC2 Windows instances into an AWS Managed Services account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/393c21cb-b6c6-4446-b597-b62e29fdb7f8/images/0b2fa855-7460-49f8-9e7f-3485e6ce1745.png)


## Tools
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, and you can scale out or scale in.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Managed Services (AMS)](https://docs.aws.amazon.com/managedservices/?id=docs_gateway) helps you operate more efficiently and securely by providing ongoing management of your AWS infrastructure, including monitoring, incident management, security guidance, patch support, and backup for AWS workloads.

**Other services**
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.

## Epics
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-epics"></a>

### Configure settings on the instance
<a name="configure-settings-on-the-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Change the DNS Client settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
| Change the Windows Update settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
| Enable the firewall. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
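
As a rough illustration of the three tasks above (not the exact AMS procedure, which is in the linked documentation), the settings can be changed with standard Windows cmdlets from an elevated session; the interface alias and DNS server address are placeholders for your environment.

```powershell
# Hypothetical sketch only; run from an elevated PowerShell session on the
# instance. Placeholder values: interface alias 'Ethernet', DNS server 10.0.0.2.
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses '10.0.0.2'
# Set the Windows Update service to manual startup (assumption: AMS manages patching).
Set-Service -Name wuauserv -StartupType Manual
# Enable Windows Firewall on all profiles.
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled True
```

Record the original values before changing them so that you can roll back if the pre-WIG validation fails.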

### Prepare the instance for AMS WIGS
<a name="prepare-the-instance-for-ams-wigs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up and prepare the instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
| Repair the sppnp.dll file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
| Run the pre-WIG validation script. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 
| Create the failsafe AMI. | After the pre-WIG validation passes, create a pre-ingestion AMI as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html)For more information, see [AMI Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-ami-create.html) in the AMS documentation. | Migration engineer | 

### Ingest and validate the instance
<a name="ingest-and-validate-the-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Submit the RFC to create the workload ingest stack. | Submit a request for change (RFC) to start the AMS WIGS. For instructions, see [Workload Ingest Stack: Creating](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-workload-ingest-col.html) in the AMS documentation. This starts the workload ingestion and installs all the software required by AMS, including backup tools, Amazon EC2 management software, and antivirus software. | Migration engineer | 
| Validate successful migration. | After the workload ingestion is complete, you can see the AMS-managed instance and AMS-ingested AMI.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 

### Launch the instance in the target AMS account
<a name="launch-the-instance-in-the-target-ams-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Submit the RFC to create an EC2 stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.html) | Migration engineer | 

## Related resources
<a name="ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account-resources"></a>

**AWS Prescriptive Guidance**
+ [Automate pre-workload ingestion activities for AWS Managed Services on Windows](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-pre-workload-ingestion-activities-for-aws-managed-services-on-windows.html)
+ [Automatically create an RFC in AMS using Python](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html)

**AMS documentation**
+ [AMS Workload Ingest](https://docs.aws.amazon.com/managedservices/latest/appguide/ams-workload-ingest.html)
+ [How Migration Changes Your Resource](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-migrate-changes.html)
+ [Migrating Workloads: Standard Process](https://docs.aws.amazon.com/managedservices/latest/appguide/mp-migrate-stack-process.html)

**Marketing resources**
+ [AWS Managed Services](https://aws.amazon.com/managed-services/)
+ [AWS Managed Services FAQs](https://aws.amazon.com/managed-services/faqs/)
+ [AWS Managed Services Resources](https://aws.amazon.com/managed-services/resources/)
+ [AWS Managed Services Features](https://aws.amazon.com/managed-services/features/)

# Migrate a Couchbase Server database to Amazon EC2
<a name="migrate-couchbase-server-ec2"></a>

*Subhani Shaik, Amazon Web Services*

## Summary
<a name="migrate-couchbase-server-ec2-summary"></a>

This pattern describes how you can migrate Couchbase Server from an on-premises environment to Amazon Elastic Compute Cloud (Amazon EC2) on AWS.

Couchbase Server is a distributed NoSQL (JSON document) database that provides relational database capabilities. Migrating a Couchbase Server database to AWS can provide increased scalability, improved performance, cost efficiency, enhanced security, simplified management, and global reach. These benefits are especially valuable for applications that require high availability and low-latency data access. You also gain access to advanced features through AWS managed services.

Couchbase Server on AWS provides the following key features: 
+ Memory-first architecture
+ High availability, disaster recovery, and load balancing
+ Multi-master, multi-Region deployment for optimal performance

For more information about key benefits, see the [Additional information](#migrate-couchbase-server-ec2-additional) section and the [Couchbase website](https://www.couchbase.com/partners/amazon/).

## Prerequisites and limitations
<a name="migrate-couchbase-server-ec2-prereqs"></a>

**Prerequisites**
+ An active AWS account with a virtual private cloud (VPC), two Availability Zones, private subnets, and a security group. For instructions, see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) in the Amazon Virtual Private Cloud (Amazon VPC) documentation.
+ Connectivity enabled between source and target environments. For information about the TCP ports used by Couchbase Server, see the [Couchbase documentation](https://docs.couchbase.com/server/current/install/install-ports.html).
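
To confirm the connectivity prerequisite, you can probe a few commonly used Couchbase Server ports from a Windows host. The following is a sketch, not part of this pattern's tooling; the target IP address is a placeholder, and the complete port list is in the Couchbase documentation linked above.

```powershell
# Sketch: probe common Couchbase Server ports (8091 web console, 8092 views,
# 8093 query, 11210 data). 10.0.1.25 is a placeholder target address.
$couchbaseHost = '10.0.1.25'
$ports = 8091, 8092, 8093, 11210
if (Get-Command Test-NetConnection -ErrorAction SilentlyContinue) {
    foreach ($port in $ports) {
        $ok = (Test-NetConnection -ComputerName $couchbaseHost -Port $port `
            -WarningAction SilentlyContinue).TcpTestSucceeded
        Write-Host ("{0}:{1} reachable: {2}" -f $couchbaseHost, $port, $ok)
    }
} else {
    Write-Host 'Test-NetConnection is not available on this system.'
}
```

Run the same checks in both directions if your cluster replicates data between the on-premises and AWS environments during cutover.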

## Architecture
<a name="migrate-couchbase-server-ec2-architecture"></a>

The following diagram shows the high-level architecture for migrating Couchbase Server to AWS.

![\[Migration architecture for rehosting Couchbase Server on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4cedced2-3528-4f12-b19e-7d389e820cc1/images/ac22133a-895f-4999-b1e1-57f69e83a326.png)


From the on-premises Couchbase cluster, data moves through a customer gateway by using [AWS Direct Connect](https://aws.amazon.com/directconnect/). The data passes through a router and a Direct Connect route and reaches the VPC through an [AWS Site-to-Site VPN](https://aws.amazon.com/vpn/) gateway. The VPC contains an EC2 instance that is running Couchbase Server. The AWS infrastructure also includes [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) for access control, [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) for data encryption, [Amazon Elastic Block Store (Amazon EBS)](https://aws.amazon.com/ebs/) for block storage, and [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) for data storage.

## Tools
<a name="migrate-couchbase-server-ec2-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.

## Best practices
<a name="migrate-couchbase-server-ec2-best-practices"></a>
+ [Installing and configuring Couchbase](https://docs.couchbase.com/server/current/install/install-intro.html) on different operating platforms
+ [Best practices](https://docs.couchbase.com/server/current/cloud/couchbase-cloud-deployment.html#aws-best-practices) for deploying Couchbase Server on AWS
+ [Creating a Couchbase cluster](https://docs.couchbase.com/server/current/manage/manage-nodes/create-cluster.html)
+ [Performance best practices](https://docs.couchbase.com/dotnet-sdk/current/project-docs/performance.html) for Couchbase applications
+ [Security best practices](https://docs.couchbase.com/server/current/learn/security/security-overview.html) for Couchbase Server
+ [Storage best practices](https://www.couchbase.com/forums/t/what-is-the-best-document-storage-strategy-in-couchbase/1573) for Couchbase Server databases

## Epics
<a name="migrate-couchbase-server-ec2-epics"></a>

### Deploy an Amazon EC2 instance for Couchbase Server
<a name="deploy-an-ec2-instance-for-couchbase-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open the Amazon EC2 console. | Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/). | DevOps engineer, Couchbase administrator | 
| Deploy an Amazon EC2 instance. | Launch an EC2 instance that matches the on-premises Couchbase Server configurations. For more information about how to deploy an EC2 instance, see [Launch an Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html) in the Amazon EC2 documentation. | DevOps engineer, Couchbase administrator | 

### Install and configure Couchbase Server on Amazon EC2
<a name="install-and-configure-couchbase-server-on-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install a Couchbase cluster. | Review [Couchbase Server deployment guidelines](https://docs.couchbase.com/server/current/install/install-production-deployment.html) before you install Couchbase Server on Amazon EC2. To install Couchbase Server, see the [Couchbase Server documentation](https://docs.couchbase.com/server/current/install/install-intro.html). | Couchbase administrator | 
| Configure the cluster. | To configure the cluster, see [Cluster Configuration Options](https://docs.couchbase.com/cloud/clusters/databases.html#cluster-configuration-options) in the Couchbase documentation. | Couchbase administrator | 

### Add a new node and rebalance the Couchbase cluster
<a name="add-a-new-node-and-rebalance-the-couchbase-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add a node for the EC2 instance. | Add the newly deployed EC2 instance that has Couchbase installed to the existing on-premises cluster. For instructions, see [Add a Node and Rebalance](https://docs.couchbase.com/server/current/manage/manage-nodes/add-node-and-rebalance.html) in the Couchbase Server documentation. | Couchbase administrator | 
| Rebalance the cluster. | The rebalancing process makes the newly added node with the EC2 instance an active member of the Couchbase cluster. For instructions, see [Add a Node and Rebalance](https://docs.couchbase.com/server/current/manage/manage-nodes/add-node-and-rebalance.html) in the Couchbase Server documentation. | Couchbase administrator | 

### Reconfigure connections
<a name="reconfigure-connections"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the on-premises nodes and rebalance. | You can now remove the on-premises nodes from the cluster. After removing the nodes, follow the rebalance process to redistribute data, indexes, event processing, and query processing among available nodes in the cluster. For instructions, see [Remove a Node and Rebalance](https://docs.couchbase.com/server/current/manage/manage-nodes/remove-node-and-rebalance.html) in the Couchbase Server documentation. | Couchbase administrator | 
| Update connection parameters. | Update your application's connection parameters to use the new Amazon EC2 IP address, so your application can connect to the new node. | Couchbase application developer | 
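As a hedged sketch of the connection-parameter update, the following shell commands show how a hypothetical application configuration file could be repointed at the new Couchbase node on EC2. The file name `app.conf`, the on-premises hostname, and the EC2 private IP address are placeholders, not values from this pattern:

```shell
# Hypothetical app config that still points at the on-premises Couchbase node
cfg=app.conf
echo 'couchbase_url=couchbase://onprem-node1.example.com' > "$cfg"

# Point the connection string at the new EC2 node (placeholder private IP)
sed -i 's|couchbase://onprem-node1.example.com|couchbase://10.0.1.25|' "$cfg"
cat "$cfg"
```

In a real deployment, prefer a DNS name over a hard-coded IP address so that future node changes don't require another configuration update.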

## Related resources
<a name="migrate-couchbase-server-ec2-resources"></a>
+ [Couchbase Server Services](https://docs.couchbase.com/server/current/learn/services-and-indexes/services/services.html)
+ [Deploy Couchbase Server Using AWS Marketplace](https://docs.couchbase.com/server/current/cloud/couchbase-aws-marketplace.html)
+ [Connect to Couchbase Server](https://docs.couchbase.com/server/current/guides/connect.html)
+ [Manage Buckets](https://docs.couchbase.com/server/current/manage/manage-buckets/bucket-management-overview.html)
+ [Cross Data Center Replication (XDCR)](https://docs.couchbase.com/server/current/learn/clusters-and-availability/xdcr-overview.html)
+ [Couchbase Inc. License Agreement](https://www.couchbase.com/LA20190115/)

## Additional information
<a name="migrate-couchbase-server-ec2-additional"></a>

**Key benefits**

Migrating your Couchbase database to AWS provides the following advantages:

**Scalability**. You can scale your Couchbase cluster up or down based on demand without having to manage physical hardware, so you can easily accommodate fluctuating data volumes and application usage. AWS provides:
+ Vertical and horizontal scaling options
+ [Global deployment](https://aws.amazon.com/about-aws/global-infrastructure/) capabilities
+ Load balancing across AWS Regions
+ [Database scaling solutions](https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/)
+ [Content delivery](https://aws.amazon.com/solutions/content-delivery/) optimization

**Performance optimization**. AWS provides a high-performance network infrastructure and [optimized instance types](https://aws.amazon.com/ec2/instance-types/) to ensure fast data access and low latency for your Couchbase database.
+ [High performance computing (HPC)](https://aws.amazon.com/hpc/) options
+ Global content delivery through [Amazon CloudFront](https://aws.amazon.com/cloudfront/)
+ Multiple [storage options](https://aws.amazon.com/products/storage/)
+ Advanced [database services](https://aws.amazon.com/products/databases/), including Amazon Relational Database Service (Amazon RDS) and Amazon DynamoDB
+ Low-latency connections with [Direct Connect](https://aws.amazon.com/directconnect/)

**Cost optimization**. Select the appropriate instance type and configuration to balance performance and cost based on your workload. Pay only for the resources you use. This can potentially reduce your operational costs by eliminating the need to manage on-premises hardware and taking advantage of AWS Cloud economies of scale.
+ [Reserved Instances](https://aws.amazon.com/ec2/pricing/reserved-instances/) can help you plan ahead and reduce your costs substantially when you use Couchbase on AWS.
+ [Automatic scaling](https://aws.amazon.com/autoscaling/) prevents over-provisioning and helps you optimize your utilization and cost efficiencies.

**Enhanced security**. Benefit from the robust security features on AWS, such as data encryption, access controls, and security groups to help protect the sensitive data you store in Couchbase. Additional benefits:
+ The [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/) clearly differentiates between the security *of* the cloud (AWS responsibility) and security *in* the cloud (customer responsibility).
+ [AWS compliance](https://aws.amazon.com/compliance/) supports major security standards.
+ AWS provides advanced [encryption](https://docs.aws.amazon.com/prescriptive-guidance/latest/encryption-best-practices/welcome.html) options.
+ [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) helps you manage secure access to your resources.

**Simplified management**. Managed offerings for Couchbase on AWS, such as those available through AWS Marketplace, let you focus on application development instead of managing the underlying infrastructure.

**Global reach**. You can deploy your Couchbase cluster across multiple AWS Regions to achieve low latency for users around the world. You can deploy your databases entirely in the cloud or in a hybrid environment. You can safeguard your data with built-in enterprise-grade security and fast, efficient bidirectional synchronization of data from the edge to the cloud. At the same time, you can simplify development with a consistent programming model for building web and mobile apps.

**Business continuity**:
+ **Data backup and recovery**. In case of an issue, you can use [AWS Backup](https://aws.amazon.com/backup/) to ensure data resiliency and easy recovery. For disaster recovery options, see the [AWS Well-Architected Framework documentation](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html).
+ **Couchbase multi-Region deployment**. To deploy a Couchbase database in a multi-Region AWS environment, you can subscribe to Couchbase Server in [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-zy5g2wqmqdyzw), use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) templates to create separate Couchbase clusters in each Region, and then configure cross-Region replication to synchronize data across Regions. This configuration ensures high availability and geographic redundancy across multiple Regions. For more information, see [Deploy Couchbase Server Using AWS Marketplace](https://docs.couchbase.com/server/current/cloud/couchbase-aws-marketplace.html) in the Couchbase documentation.

**Infrastructure agility**:
+ Rapid [resource provisioning](https://aws.amazon.com/products/management-and-governance/use-cases/provisioning-and-orchestration/) and deprovisioning
+ [Global infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) reach
+ [Automatic scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html) based on demand
+ [Infrastructure as Code (IaC)](https://aws.amazon.com/what-is/iac/) for consistent deployments
+ Multiple [instance types](https://aws.amazon.com/ec2/instance-types/) that are optimized for different workloads

**Innovation enablement**:
+ Access to the latest technologies, including [AI/ML](https://aws.amazon.com/ai/generative-ai/), [IoT](https://aws.amazon.com/iot/), and [analytics](https://aws.amazon.com/big-data/datalakes-and-analytics/)
+ [Managed services](https://aws.amazon.com/blogs/architecture/reduce-operational-load-using-aws-managed-services-for-your-data-solutions/), which reduce operational overhead
+ [Modern application](https://aws.amazon.com/modern-apps/) development practices
+ [Serverless](https://aws.amazon.com/serverless/) computing options

**Operational excellence**:
+ [Centralized monitoring and logging](https://docs.aws.amazon.com/prescriptive-guidance/latest/designing-control-tower-landing-zone/logging-monitoring.html)
+ [Automated resource management](https://aws.amazon.com/systems-manager/)
+ [Predictive maintenance](https://aws.amazon.com/what-is/predictive-maintenance/) capabilities
+ [Enhanced visibility](https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-cloudwatch-provides-centralized-visibility-telemetry-configurations/) into resource usage
+ [Streamlined deployment processes](https://aws.amazon.com/blogs/mt/streamline-change-processes-and-improve-governance-with-aws-well-architected/)

**Modernization opportunities**:
+ [Microservices](https://aws.amazon.com/microservices/) architecture
+ [DevOps](https://aws.amazon.com/devops/) practices implementation
+ [Cloud-native](https://aws.amazon.com/what-is/cloud-native/) application development
+ [Legacy application modernization](https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-modernizing-applications/welcome.html)

**Competitive advantages**:
+ [Faster time to market](https://aws.amazon.com/blogs/smb/accelerate-time-to-market-and-business-growth-with-an-automated-software-as-a-service-platform/)
+ Improved [customer experience](https://aws.amazon.com/blogs/publicsector/improving-customer-experience-for-the-public-sector-using-aws-services/)
+ [Data-driven](https://aws.amazon.com/data/data-driven-decision-making/) decision-making
+ Enhanced [business intelligence](https://aws.amazon.com/what-is/business-intelligence/)

# Migrate Db2 for LUW to Amazon EC2 by using log shipping to reduce outage time
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time"></a>

*Feng Cai, Ambarish Satarkar, and Saurabh Sharma, Amazon Web Services*

## Summary
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-summary"></a>

When customers migrate their IBM Db2 for LUW (Linux, UNIX, and Windows) workloads to Amazon Web Services (AWS), the fastest approach is to use Amazon Elastic Compute Cloud (Amazon EC2) with the Bring Your Own License (BYOL) model. However, migrating large amounts of data from on-premises Db2 to AWS can be a challenge, especially when the outage window is short. Many customers try to keep the outage window under 30 minutes, which leaves little time for the database migration itself.

This pattern covers how to accomplish a Db2 migration with a short outage window by using transaction log shipping. This approach applies to Db2 on a little-endian Linux platform.

## Prerequisites and limitations
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Db2 instance running on an EC2 instance that matches the on-premises file system layouts
+ An Amazon Simple Storage Service (Amazon S3) bucket accessible to the EC2 instance
+ An AWS Identity and Access Management (IAM) policy and role to make programmatic calls to Amazon S3
+ Synchronized time zone and system clocks on Amazon EC2 and the on-premises server
+ The on-premises network connected to AWS through [AWS Site-to-Site VPN](https://aws.amazon.com/vpn/) or [AWS Direct Connect](https://aws.amazon.com/directconnect/)

**Limitations**
+ The Db2 on-premises instance and Amazon EC2 must be on the same [platform family](https://www.ibm.com/docs/en/db2/11.1?topic=dbrs-backup-restore-operations-between-different-operating-systems-hardware-platforms).
+ The Db2 on-premises workload must be logged. To block any unlogged transaction, set `blocknonlogged=yes` in the database configuration.

**Product versions**
+ Db2 for LUW version 11.5.9 and later

## Architecture
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-architecture"></a>

**Source technology stack**
+ Db2 on Linux x86_64

**Target technology stack**
+ Amazon EBS
+ Amazon EC2
+ AWS Identity and Access Management (IAM)
+ Amazon S3
+ AWS Site-to-Site VPN or Direct Connect

**Target architecture**

The following diagram shows one Db2 instance running on-premises with a virtual private network (VPN) connection to Db2 on Amazon EC2. The dotted lines represent the VPN tunnel between your data center and the AWS Cloud.

![\[Workflow to accomplish a Db2 migration within short outage window using transaction log shipping.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7dec6e4a-a92e-4204-9e42-f89d7dcafbfa/images/a7e1c1d6-2ec1-4271-952d-a58260ad7c81.png)


## Tools
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) helps you pass traffic between instances that you launch on AWS and your own remote network.

**Other tools**
+ [db2cli](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2cli-db2-interactive-cli) is the Db2 interactive CLI command.

## Best practices
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-best-practices"></a>
+ On the target database, use [gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) to access the database backup image and log files in Amazon S3.
+ On the source database, use [AWS PrivateLink for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html) to send the database backup image and log files to Amazon S3.

## Epics
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-epics"></a>

### Set environment variables
<a name="set-environment-variables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set environment variables. | This pattern uses the following names: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time.html) You can change them to fit your environment. | DBA | 

### Configure the on-premises Db2 server
<a name="configure-the-on-premises-db2-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS CLI. | To download and install the latest version of the AWS CLI, run the following commands:<pre>curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />unzip awscliv2.zip<br />sudo ./aws/install</pre> | Linux administrator | 
| Set up a local destination for Db2 archive logs. | To keep the target database on Amazon EC2 in sync with the on-premises source database, the latest transaction logs need to be retrieved from the source. In this setup, `/db2logs` is set by `LOGARCHMETH2` on the source as a staging area. The archived logs in this directory will be synced to Amazon S3 and accessed by Db2 on Amazon EC2. The pattern uses `LOGARCHMETH2` because `LOGARCHMETH1` might be configured to use a third-party vendor tool that the AWS CLI cannot access. To set the archive log destination, run the following commands: <pre>db2 connect to sample<br />db2 update db cfg for SAMPLE using LOGARCHMETH2 disk:/db2logs</pre> | DBA | 
| Run an online database backup. | Run an online database backup, and save it to the local backup file system: <pre>db2 backup db sample online to /backup </pre> | DBA | 

### Set up the S3 bucket and IAM policy
<a name="set-up-the-s3-bucket-and-iam-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Create an S3 bucket to receive the Db2 backup images and log files from the on-premises server. The bucket will also be accessed by Amazon EC2:<pre>aws s3api create-bucket --bucket logshipmig-db2 --region us-east-1 </pre> | AWS systems administrator | 
|  Create an IAM policy. | The `db2bucket.json` file contains the IAM policy to access the Amazon S3 bucket:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "kms:GenerateDataKey",<br />                "kms:Decrypt",<br />                "s3:PutObject",<br />                "s3:GetObject",<br />                "s3:AbortMultipartUpload",<br />                "s3:ListBucket",<br />                "s3:DeleteObject",<br />                "s3:GetObjectVersion",<br />                "s3:ListMultipartUploadParts"<br />            ],<br />            "Resource": [<br />                "arn:aws:s3:::logshipmig-db2/*",<br />                "arn:aws:s3:::logshipmig-db2"<br />            ]<br />        }<br />    ]<br />}</pre>To create the policy, use the following AWS CLI command:<pre>aws iam create-policy \<br />      --policy-name db2s3policy \<br />      --policy-document file://db2bucket.json </pre> The JSON output shows the Amazon Resource Name (ARN) for the policy, where `aws_account_id` represents your account ID:<pre>"Arn": "arn:aws:iam::aws_account_id:policy/db2s3policy"</pre> | AWS administrator, AWS systems administrator | 
| Attach the IAM policy to the IAM role used by the EC2 instance. | In most AWS environments, a running EC2 instance has an IAM role that was set by your systems administrator. If the IAM role is not set, create the role, and then choose **Modify IAM role** on the Amazon EC2 console to associate the role with the EC2 instance that hosts the Db2 database. Attach the IAM policy to the IAM role by using the policy ARN:<pre>aws iam attach-role-policy \<br />    --policy-arn "arn:aws:iam::aws_account_id:policy/db2s3policy"  \<br />    --role-name db2s3role  </pre>After the policy is attached, any EC2 instance associated with the IAM role can access the S3 bucket. | AWS administrator, AWS systems administrator | 
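If the role doesn't exist yet, it needs a trust policy that lets EC2 assume it. The following sketch writes a minimal trust policy file; the file name `ec2-trust.json` is an assumption of this example, not part of the pattern:

```shell
# Write a minimal EC2 trust policy (sketch). Pass it to a command such as:
#   aws iam create-role --role-name db2s3role \
#       --assume-role-policy-document file://ec2-trust.json
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```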

### Send the source database backup image and log files to Amazon S3
<a name="send-the-source-database-backup-image-and-log-files-to-amazon-s3"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the AWS CLI on the on-premises Db2 server. | Configure the AWS CLI with the `Access Key ID` and `Secret Access Key` generated in the earlier step:<pre>$ aws configure <br />AWS Access Key ID [None]: *************<br />AWS Secret Access Key [None]: ***************************<br />Default region name [None]: us-east-1<br />Default output format [None]: json</pre>  | AWS administrator, AWS systems administrator | 
| Send the backup image to Amazon S3. | Earlier, an online database backup was saved to the `/backup` local directory. To send that backup image to the S3 bucket, run the following command:<pre>aws s3 sync /backup s3://logshipmig-db2/SAMPLE_backup</pre> | AWS administrator, Migration engineer | 
| Send the Db2 archive logs to Amazon S3. | Sync the on-premises Db2 archive logs with the S3 bucket that can be accessed by the target Db2 instance on Amazon EC2:<pre>aws s3 sync /db2logs s3://logshipmig-db2/SAMPLE_LOG</pre>Run this command periodically by using cron or other scheduling tools. The frequency depends on how often the source database archives transaction log files.  | AWS administrator, Migration engineer | 
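For example, a crontab entry on the on-premises server could run the sync at the top of every hour. This is a sketch; the log-file path is an assumption, and the bucket name and prefix come from this pattern:

```shell
# On-premises crontab entry (sketch): hourly sync of archived Db2 logs to S3
0 * * * * aws s3 sync /db2logs s3://logshipmig-db2/SAMPLE_LOG >> /var/log/db2-s3-sync.log 2>&1
```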

### Connect Db2 on Amazon EC2 to Amazon S3 and start the database sync
<a name="connect-db2-on-amazon-ec2-to-amazon-s3-and-start-the-database-sync"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a PKCS12 keystore. | Db2 uses a Public-Key Cryptography Standards (PKCS) encryption keystore to keep the AWS access key secure. Create a keystore and configure the source Db2 instance to use it:<pre>gsk8capicmd_64 -keydb -create -db "/home/db2inst1/.keystore/db2s3.p12" -pw "<password>" -type pkcs12 -stash <br /> <br />db2 "update dbm cfg using keystore_location /home/db2inst1/.keystore/db2s3.p12 keystore_type pkcs12"</pre> | DBA | 
| Create the Db2 storage access alias. | To create the [storage access alias](https://www.ibm.com/docs/en/db2/11.5?topic=commands-catalog-storage-access), use the following script syntax:`db2 "catalog storage access alias <alias_name> vendor S3 server <S3 endpoint> container '<bucket_name>'"`For example, your script might look like the following: `db2 "catalog storage access alias DB2AWSS3 vendor S3 server s3.us-east-1.amazonaws.com container 'logshipmig-db2'" ` | DBA | 
| Set the staging area. | By default, Db2 uses `DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH` as the staging area to upload and download files to and from Amazon S3. The default path is `sqllib/tmp/RemoteStorage.xxxx` under the instance home directory, with `xxxx` referring to the Db2 partition number. Note that the staging area must have enough capacity to hold the backup images and log files. You can use the registry to point the staging area to a different directory. We also recommend using `DB2_ENABLE_COS_SDK=ON`, `DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore`, and the link to the `awssdk` library to bypass the Amazon S3 staging area for database backup and restore:<pre>#By root:<br />cp -rp /home/db2inst1/sqllib/lib64/awssdk/RHEL/7.6/* /home/db2inst1/sqllib/lib64/<br /><br />#By db2 instance owner:<br />db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/db2stage<br />db2set DB2_ENABLE_COS_SDK=ON<br />db2set DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore<br />db2stop<br />db2start</pre> | DBA | 
| Restore the database from the backup image. | Restore the target database on Amazon EC2 from the backup image in the S3 bucket:<pre>db2 restore db sample from DB2REMOTE://DB2AWSS3/logshipmig-db2/SAMPLE_backup replace existing</pre> | DBA | 
| Roll forward the database. | After the restore is complete, the target database will be in a rollforward-pending state. Configure `LOGARCHMETH1` and `LOGARCHMETH2` so that Db2 knows where to get the transaction log files:<pre>db2 update db cfg for SAMPLE using LOGARCHMETH1 'DB2REMOTE://DB2AWSS3//SAMPLE_LOG/'<br />db2 update db cfg for SAMPLE using LOGARCHMETH2 OFF</pre>Start the database rollforward:<pre>db2 ROLLFORWARD DATABASE sample to END OF LOGS</pre>This command processes all log files that have been transferred to the S3 bucket. Run it periodically based on the frequency of the `s3 sync` command on the on-premises Db2 server. For example, if `s3 sync` runs every hour and takes 10 minutes to sync all the log files, set this command to run at 10 minutes after each hour.  | DBA | 
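Following the timing guidance above, a crontab entry for the Db2 instance owner on the EC2 instance could run the rollforward 10 minutes after each hourly sync. This is a sketch; sourcing `db2profile` and the log-file path are assumptions of this example:

```shell
# Target-side crontab entry (sketch): roll forward 10 minutes past each hour
10 * * * * . /home/db2inst1/sqllib/db2profile && db2 "ROLLFORWARD DATABASE sample TO END OF LOGS" >> /tmp/db2-rollforward.log 2>&1
```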

### Bring Db2 on Amazon EC2 online during the cutover window
<a name="bring-db2-on-amazon-ec2-online-during-the-cutover-window"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bring the target database online. | During the cutover window, do one of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time.html) After the last transaction log is synced to Amazon S3, run the `ROLLFORWARD` command for the final time:<pre>db2 rollforward DB sample to END OF LOGS<br />db2 rollforward DB sample complete<br /><br />                                 Rollforward Status<br />....<br /> Rollforward status                     = not pending<br />....<br />DB20000I  The ROLLFORWARD command completed successfully.<br /><br />db2 activate db sample<br />DB20000I  The ACTIVATE DATABASE command completed successfully.</pre>Bring the target database online, and point the application connections to Db2 on Amazon EC2. | DBA | 

## Troubleshooting
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| If multiple databases have the same instance name and database name on different hosts (DEV, QA, PROD), backups and logs might go to the same subdirectory. | Use different S3 buckets for DEV, QA, and PROD, and add the hostname as a subdirectory prefix to avoid confusion. | 
| If there are multiple backup images in the same location, you will get the following error when you restore:`SQL2522N More than one backup file matches the time stamp value provided for the backed up database image.` | In the `restore` command, add the timestamp of the backup:`db2 restore db sample from DB2REMOTE://DB2AWSS3/logshipmig-db2/SAMPLE_backup taken at 20230628164042 replace existing` | 
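As a hedged sketch of the hostname-prefix idea from the troubleshooting table, the following commands build the S3 destination from the local hostname so that identically named DEV, QA, and PROD instances write to distinct prefixes. The `aws s3 sync` command is only echoed here, not executed; the bucket name and paths come from this pattern:

```shell
# Build a hostname-qualified S3 prefix (sketch) and show the resulting command
host=$(hostname -s)
dest="s3://logshipmig-db2/${host}/SAMPLE_backup"
echo "aws s3 sync /backup ${dest}"
```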

## Related resources
<a name="migrate-db2-for-luw-to-amazon-ec2-by-using-log-shipping-to-reduce-outage-time-resources"></a>
+ [Db2 backup and restore operations between different operating systems and hardware platforms](https://www.ibm.com/docs/en/db2/11.5?topic=dbrs-backup-restore-operations-between-different-operating-systems-hardware-platforms)
+ [Set up Db2 STORAGE ACCESS ALIAS and DB2REMOTE](https://www.ibm.com/docs/en/db2/11.5?topic=commands-catalog-storage-access)
+ [Db2 ROLLFORWARD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-rollforward-database)
+ [Db2 secondary log archive method](https://www.ibm.com/docs/en/db2/11.5?topic=parameters-logarchmeth2-secondary-log-archive-method)

# Migrate Db2 for LUW to Amazon EC2 with high availability disaster recovery
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery"></a>

*Feng Cai, Aruna Gangireddy, and Venkatesan Govindan, Amazon Web Services*

## Summary
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-summary"></a>

When customers migrate their IBM Db2 for LUW (Linux, UNIX, and Windows) workloads to Amazon Web Services (AWS), the fastest approach is to use Amazon Elastic Compute Cloud (Amazon EC2) with the Bring Your Own License (BYOL) model. However, migrating large amounts of data from on-premises Db2 to AWS can be a challenge, especially when the outage window is short. Many customers try to keep the outage window under 30 minutes, which leaves little time for the database migration itself.

This pattern covers how to accomplish a Db2 migration with a short outage window by using Db2 high availability disaster recovery (HADR). The approach applies to Db2 databases that run on the little-endian Linux platform and do not use the Database Partitioning Feature (DPF).

## Prerequisites and limitations
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Db2 instance running on an Amazon EC2 instance that matches the on-premises file system layouts
+ An Amazon Simple Storage Service (Amazon S3) bucket accessible to the EC2 instance
+ An AWS Identity and Access Management (IAM) policy and role to make programmatic calls to Amazon S3
+ Synchronized time zone and system clocks on Amazon EC2 and the on-premises server
+ The on-premises network connected to AWS through [AWS Site-to-Site VPN](https://aws.amazon.com/vpn/) or [AWS Direct Connect](https://aws.amazon.com/directconnect/)
+ Communication between the on-premises server and Amazon EC2 on HADR ports

**Limitations**
+ The Db2 on-premises instance and Amazon EC2 must be on the same [platform family](https://www.ibm.com/docs/en/db2/11.1?topic=dbrs-backup-restore-operations-between-different-operating-systems-hardware-platforms).
+ HADR is not supported in a partitioned database environment.
+ HADR doesn’t support the use of raw I/O (direct disk access) for database log files.
+ HADR doesn’t support infinite logging.
+ `LOGINDEXBUILD` must be set to `YES`, which will increase the log usage for rebuilding the index.
+ The Db2 on-premises workload must be logged. Set `blocknonlogged=yes` in the database configuration to block any unlogged transactions.
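
The last two requirements can be applied on the source database before migration. The following is a hedged sketch that assumes a database named `sample` (the name used later in this pattern); verify the parameter values against your Db2 version:

```shell
# Connect to the source database
db2 connect to sample

# Log index builds so the standby can replay CREATE INDEX and index rebuilds
db2 update db cfg for sample using LOGINDEXBUILD ON

# Block unlogged operations (for example, NOT LOGGED INITIALLY tables)
db2 update db cfg for sample using blocknonlogged yes

db2 connect reset
```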

**Product versions**
+ Db2 for LUW version 11.5.9 and later

## Architecture
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-architecture"></a>

**Source technology stack**
+ Db2 on Linux x86_64

**Target technology stack**
+ Amazon EC2
+ AWS Identity and Access Management (IAM)
+ Amazon S3
+ AWS Site-to-Site VPN

**Target architecture**

In the following diagram, Db2 on premises is running on `db2-server1` as the primary. It has two HADR standby targets. One standby target is on premises and is optional. The other standby target, `db2-ec2`, is on Amazon EC2. After the database is cut over to AWS, `db2-ec2` becomes the primary.

![\[Workflow to migrate with a short outage window an on-premises Db2 by using Db2 HADR.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2db43e4b-f0ea-4a92-96da-4cafb7d3368b/images/5295420e-3cd8-4127-9a18-ade971c36339.png)


1. Logs are streamed from the primary on-premises database to the standby on-premises database.

1. Using Db2 HADR, logs are streamed from the primary on-premises database through Site-to-Site VPN to Db2 on Amazon EC2.

1. Db2 backup and archive logs are sent from the primary on-premises database to the S3 bucket on AWS.

## Tools
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) helps you pass traffic between instances that you launch on AWS and your own remote network.

**Other tools**
+ [db2cli](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2cli-db2-interactive-cli) is the Db2 interactive CLI command.

## Best practices
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-best-practices"></a>
+ On the target database, use [gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) to access the database backup image and log files in Amazon S3.
+ On the source database, use [AWS PrivateLink for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html) to send the database backup image and log files to Amazon S3.
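
For example, the gateway endpoint on the target side can be created with the AWS CLI. The VPC and route table IDs below are placeholders for your own values:

```shell
# Create a gateway endpoint for Amazon S3 in the VPC that hosts the EC2 instance
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123example \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc123example
```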

## Epics
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-epics"></a>

### Set environment variables
<a name="set-environment-variables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set environment variables. | This pattern uses the following names and ports:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery.html)You can change them to fit your environment. | DBA | 

### Configure the on-premises Db2 server
<a name="configure-the-on-premises-db2-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS CLI. | To download and install the latest version of the AWS CLI, run the following commands:<pre>curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />unzip awscliv2.zip<br />sudo ./aws/install</pre> | Linux administrator | 
| Set up a local destination for Db2 archive logs. | Conditions such as heavy update batch jobs and network slowdowns can cause the HADR standby server to have a lag. To catch up, the standby server needs the transaction logs from the primary server. The sequence of places to request logs is the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery.html)In this setup, `/db2logs` is set by `LOGARCHMETH2` on the source as a staging area. The archived logs in this directory will be synced into Amazon S3 and accessed by Db2 on Amazon EC2. The pattern uses `LOGARCHMETH2` because `LOGARCHMETH1` might have been configured to use a third-party vendor tool that the AWS CLI command cannot access:<pre>db2 connect to sample<br />db2 update db cfg for SAMPLE using LOGARCHMETH2 disk:/db2logs</pre> | DBA | 
| Run an online database backup. | Run an online database backup, and save it to the local backup file system:<pre>db2 backup db sample online to /backup </pre> | DBA | 

### Set up the S3 bucket and IAM policy
<a name="set-up-the-s3-bucket-and-iam-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Create an S3 bucket for the on-premises server to send the backup Db2 images and log files to on AWS. The bucket will be accessed by Amazon EC2:<pre>aws s3api create-bucket --bucket hadrmig-db2 --region us-east-1 </pre> | AWS administrator | 
| Create an IAM policy. | The `db2bucket.json` file contains the IAM policy for accessing the S3 bucket:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "kms:GenerateDataKey",<br />                "kms:Decrypt",<br />                "s3:PutObject",<br />                "s3:GetObject",<br />                "s3:AbortMultipartUpload",<br />                "s3:ListBucket",<br />                "s3:DeleteObject",<br />                "s3:GetObjectVersion",<br />                "s3:ListMultipartUploadParts"<br />            ],<br />            "Resource": [<br />                "arn:aws:s3:::hadrmig-db2/*",<br />                "arn:aws:s3:::hadrmig-db2"<br />            ]<br />        }<br />    ]<br />}</pre>To create the policy, use the following AWS CLI command:<pre>aws iam create-policy \<br />      --policy-name db2s3hapolicy \<br />      --policy-document file://db2bucket.json </pre>The JSON output shows the Amazon Resource Name (ARN) for the policy, where `aws_account_id` represents your account ID:<pre>"Arn": "arn:aws:iam::aws_account_id:policy/db2s3hapolicy"</pre> | AWS administrator, AWS systems administrator | 
| Attach the IAM policy to the IAM role. | Usually, the EC2 instance that runs Db2 has an IAM role assigned by the systems administrator. If no IAM role is assigned, you can choose **Modify IAM role** on the Amazon EC2 console. Attach the IAM policy to the IAM role associated with the EC2 instance. After the policy is attached, the EC2 instance can access the S3 bucket:<pre>aws iam attach-role-policy --policy-arn "arn:aws:iam::aws_account_id:policy/db2s3hapolicy" --role-name db2s3harole</pre> |  | 
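
To confirm that the attachment succeeded, you can list the policies on the role (assuming the `db2s3harole` role name from the command above):

```shell
# Expect db2s3hapolicy to appear in the AttachedPolicies list
aws iam list-attached-role-policies --role-name db2s3harole
```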

### Send the source database backup image and log files to Amazon S3
<a name="send-the-source-database-backup-image-and-log-files-to-amazon-s3"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS CLI on the on-premises Db2 server. | Configure AWS CLI with the `Access Key ID` and `Secret Access Key` that you generated earlier:<pre>$ aws configure <br />AWS Access Key ID [None]: *************<br />AWS Secret Access Key [None]: ***************************<br />Default region name [None]: us-east-1<br />Default output format [None]: json</pre> | AWS administrator, AWS systems administrator | 
| Send the backup image to Amazon S3. | Earlier, an online database backup was saved to the `/backup` local directory. To send that backup image to the S3 bucket, run the following command:<pre>aws s3 sync /backup s3://hadrmig-db2/SAMPLE_backup</pre> | AWS administrator, AWS systems administrator | 
| Send the Db2 archive logs to Amazon S3. | Sync the on-premises Db2 archive logs with the Amazon S3 bucket that can be accessed by the target Db2 instance on Amazon EC2:<pre>aws s3 sync /db2logs s3://hadrmig-db2/SAMPLE_LOGS</pre>Run this command periodically by using cron or other scheduling tools. The frequency depends on how often the source database archives transaction log files. |  | 
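
For example, a crontab entry can run the sync periodically. The 5-minute interval, AWS CLI path, and log file below are assumptions to adapt to your environment:

```shell
# m h dom mon dow  command
*/5 * * * * /usr/local/bin/aws s3 sync /db2logs s3://hadrmig-db2/SAMPLE_LOGS >> /var/log/db2logsync.log 2>&1
```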

### Connect Db2 on Amazon EC2 to Amazon S3 and start the initial database sync
<a name="connect-db2-on-amazon-ec2-to-amazon-s3-and-start-the-initial-database-sync"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a PKCS12 keystore. | Db2 uses a Public-Key Cryptography Standards (PKCS) encryption keystore to keep the AWS access key secure. Create a keystore, and configure the source Db2 to use it:<pre>gsk8capicmd_64 -keydb -create -db "/home/db2inst1/.keystore/db2s3.p12" -pw "<password>" -type pkcs12 -stash <br /> <br />db2 "update dbm cfg using keystore_location /home/db2inst1/.keystore/db2s3.p12 keystore_type pkcs12"</pre> | DBA | 
| Create the Db2 storage access alias. | Db2 uses a storage access alias to access Amazon S3 directly with the `INGEST`, `LOAD`, `BACKUP DATABASE`, or `RESTORE DATABASE` commands. Because you assigned an IAM role to the EC2 instance, `USER` and `PASSWORD` are not required:`db2 "catalog storage access alias <alias_name> vendor S3 server <S3 endpoint> container '<bucket_name>'"`For example, your script might look like the following: `db2 "catalog storage access alias DB2AWSS3 vendor S3 server s3.us-east-1.amazonaws.com container 'hadrmig-db2'" ` | DBA | 
| Set the staging area. | We recommend using `DB2_ENABLE_COS_SDK=ON`, `DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore`, and the link to the `awssdk` library to bypass the Amazon S3 staging area for database backup and restore:<pre>#By root:<br />cp -rp /home/db2inst1/sqllib/lib64/awssdk/RHEL/7.6/* /home/db2inst1/sqllib/lib64/<br /><br />#By db2 instance owner:<br />db2set DB2_ENABLE_COS_SDK=ON<br />db2set DB2_OBJECT_STORAGE_SETTINGS=EnableStreamingRestore<br />db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/db2stage<br />db2stop<br />db2start</pre> | DBA | 
| Restore the database from the backup image. | Restore the target database on Amazon EC2 from the backup image in the S3 bucket:<pre>db2 create db sample on /data1<br />db2 restore db sample from DB2REMOTE://DB2AWSS3/hadrmig-db2/SAMPLE_backup replace existing</pre> | DBA | 
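
Because HADR will later stream the remaining log records to this standby, the restored database is expected to remain in rollforward-pending state; don't roll it forward manually. An optional status check:

```shell
# Confirm the database is in rollforward-pending state
db2 rollforward db sample query status
```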

### Set up HADR with no HADR on premises
<a name="set-up-hadr-with-no-hadr-on-premises"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the on-premises Db2 server as the primary. | Update the database configuration settings for HADR on `db2-server1` (the on-premises source) as the primary. Set `HADR_SYNCMODE` to `SUPERASYNC` mode, which has the shortest transaction response time:<pre>db2 update db cfg for sample using HADR_LOCAL_HOST db2-server1 HADR_LOCAL_SVC 50010 HADR_REMOTE_HOST db2-ec2 HADR_REMOTE_SVC 50012 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE SUPERASYNC<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.</pre>Some network delays between the on-premises data center and AWS are expected. You can set a different `HADR_SYNCMODE` value based on network reliability. For more information, see the [Related resources](#migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-resources) section. | DBA | 
| Change the target database log archive destination. | Change the target database log archive destination to match the Amazon EC2 environment:<pre>db2 update db cfg for SAMPLE using LOGARCHMETH1 'DB2REMOTE://DB2AWSS3//SAMPLE_LOGS/' LOGARCHMETH2 OFF<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully</pre> | DBA | 
| Configure HADR for Db2 on the Amazon EC2 server. | Update the database configuration for HADR on `db2-ec2` as the standby:<pre>db2 update db cfg for sample using HADR_LOCAL_HOST db2-ec2 HADR_LOCAL_SVC 50012 HADR_REMOTE_HOST db2-server1 HADR_REMOTE_SVC 50010 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE SUPERASYNC<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.</pre> | DBA | 
| Verify HADR setup. | Verify the HADR parameters on the source and target Db2 servers. To verify the setup on `db2-server1`, run the following command:<pre>db2 get db cfg for sample|grep HADR<br /> HADR database role                                      = PRIMARY<br /> HADR local host name                  (HADR_LOCAL_HOST) = db2-server1<br /> HADR local service name                (HADR_LOCAL_SVC) = 50010<br /> HADR remote host name                (HADR_REMOTE_HOST) = db2-ec2<br /> HADR remote service name              (HADR_REMOTE_SVC) = 50012<br /> HADR instance name of remote server  (HADR_REMOTE_INST) = db2inst1<br /> HADR timeout value                       (HADR_TIMEOUT) = 120<br /> HADR target list                     (HADR_TARGET_LIST) = <br /> HADR log write synchronization mode     (HADR_SYNCMODE) = SUPERASYNC<br /> HADR spool log data limit (4KB)      (HADR_SPOOL_LIMIT) = AUTOMATIC(52000)<br /> HADR log replay delay (seconds)     (HADR_REPLAY_DELAY) = 0<br /> HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0<br /> HADR SSL certificate label             (HADR_SSL_LABEL) =<br /> HADR SSL Hostname Validation        (HADR_SSL_HOST_VAL) = OFF</pre> To verify the setup on `db2-ec2`, run the following command:<pre>db2 get db cfg for sample|grep HADR<br /> HADR database role                                      = STANDBY<br /> HADR local host name                  (HADR_LOCAL_HOST) = db2-ec2<br /> HADR local service name                (HADR_LOCAL_SVC) = 50012<br /> HADR remote host name                (HADR_REMOTE_HOST) = db2-server1<br /> HADR remote service name              (HADR_REMOTE_SVC) = 50010<br /> HADR instance name of remote server  (HADR_REMOTE_INST) = db2inst1<br /> HADR timeout value                       (HADR_TIMEOUT) = 120<br /> HADR target list                     (HADR_TARGET_LIST) = <br /> HADR log write synchronization mode     (HADR_SYNCMODE) = SUPERASYNC<br /> HADR spool log data limit (4KB)      (HADR_SPOOL_LIMIT) = AUTOMATIC(52000)<br /> HADR log replay delay (seconds)     (HADR_REPLAY_DELAY) = 0<br /> HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0<br /> HADR SSL certificate label             (HADR_SSL_LABEL) =<br /> HADR SSL Hostname Validation        (HADR_SSL_HOST_VAL) = OFF</pre>The `HADR_LOCAL_HOST`, `HADR_LOCAL_SVC`, `HADR_REMOTE_HOST`, and `HADR_REMOTE_SVC` parameters indicate the one-primary, one-standby HADR setup. | DBA | 
| Start up the Db2 HADR instance. | Start the Db2 HADR instance on the standby server `db2-ec2` first:<pre>db2 start hadr on db sample as standby<br />DB20000I  The START HADR ON DATABASE command completed successfully.</pre>Start Db2 HADR on the primary (source) server `db2-server1`:<pre>db2 start hadr on db sample as primary<br />DB20000I  The START HADR ON DATABASE command completed successfully.</pre>The HADR connection between Db2 on premises and on Amazon EC2 has now been successfully established. The Db2 primary server `db2-server1` starts streaming transaction log records to `db2-ec2` in real time. | DBA | 

### Set up HADR when HADR exists on premises
<a name="set-up-hadr-when-hadr-exists-on-premises"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add Db2 on Amazon EC2 as an auxiliary standby. | If HADR is running on the on-premises Db2 instance, you can add Db2 on Amazon EC2 as an auxiliary standby through `HADR_TARGET_LIST` by running the following commands on `db2-ec2`:<pre>db2 update db cfg for sample using HADR_LOCAL_HOST db2-ec2 HADR_LOCAL_SVC 50012 HADR_REMOTE_HOST db2-server1 HADR_REMOTE_SVC 50010 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE SUPERASYNC<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.<br /><br />db2 update db cfg for sample using HADR_TARGET_LIST "db2-server1:50010|db2-server2:50011"<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.</pre> | DBA | 
| Add the auxiliary standby information to the on-premises servers. | Update `HADR_TARGET_LIST` on the two on-premises servers (primary and standby). On `db2-server1`, run the following code:<pre>db2 update db cfg for sample using HADR_TARGET_LIST "db2-server2:50011|db2-ec2:50012"<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.<br />SQL1363W  One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, the database must be shutdown and reactivated before the configuration parameter changes become effective.</pre>On `db2-server2`, run the following code:<pre>db2 update db cfg for sample using HADR_TARGET_LIST "db2-server1:50010|db2-ec2:50012"<br />DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.<br />SQL1363W  One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, the database must be shutdown and reactivated before the configuration parameter changes become effective.</pre> | DBA | 
| Verify HADR setup. | Verify the HADR parameters on the source and target Db2 servers. On `db2-server1`, run the following code:<pre>db2 get db cfg for sample|grep HADR<br /> HADR database role                                      = PRIMARY<br /> HADR local host name                  (HADR_LOCAL_HOST) = db2-server1<br /> HADR local service name                (HADR_LOCAL_SVC) = 50010<br /> HADR remote host name                (HADR_REMOTE_HOST) = db2-server2<br /> HADR remote service name              (HADR_REMOTE_SVC) = 50011<br /> HADR instance name of remote server  (HADR_REMOTE_INST) = db2inst1<br /> HADR timeout value                       (HADR_TIMEOUT) = 120<br /> HADR target list                     (HADR_TARGET_LIST) = db2-server2:50011|db2-ec2:50012<br /> HADR log write synchronization mode     (HADR_SYNCMODE) = NEARSYNC<br /> HADR spool log data limit (4KB)      (HADR_SPOOL_LIMIT) = AUTOMATIC(52000)<br /> HADR log replay delay (seconds)     (HADR_REPLAY_DELAY) = 0<br /> HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0<br /> HADR SSL certificate label             (HADR_SSL_LABEL) =<br /> HADR SSL Hostname Validation        (HADR_SSL_HOST_VAL) = OFF</pre>On `db2-server2`, run the following code:<pre>db2 get db cfg for sample|grep HADR<br /> HADR database role                                      = STANDBY<br /> HADR local host name                  (HADR_LOCAL_HOST) = db2-server2<br /> HADR local service name                (HADR_LOCAL_SVC) = 50011<br /> HADR remote host name                (HADR_REMOTE_HOST) = db2-server1<br /> HADR remote service name              (HADR_REMOTE_SVC) = 50010<br /> HADR instance name of remote server  (HADR_REMOTE_INST) = db2inst1<br /> HADR timeout value                       (HADR_TIMEOUT) = 120<br /> HADR target list                     (HADR_TARGET_LIST) = db2-server1:50010|db2-ec2:50012<br /> HADR log write synchronization mode     (HADR_SYNCMODE) = NEARSYNC<br /> HADR spool log data limit (4KB)      (HADR_SPOOL_LIMIT) = AUTOMATIC(52000)<br /> HADR log replay delay (seconds)     (HADR_REPLAY_DELAY) = 0<br /> HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0<br /> HADR SSL certificate label             (HADR_SSL_LABEL) =<br /> HADR SSL Hostname Validation        (HADR_SSL_HOST_VAL) = OFF</pre>On `db2-ec2`, run the following code:<pre>db2 get db cfg for sample|grep HADR<br /> HADR database role                                      = STANDBY<br /> HADR local host name                  (HADR_LOCAL_HOST) = db2-ec2<br /> HADR local service name                (HADR_LOCAL_SVC) = 50012<br /> HADR remote host name                (HADR_REMOTE_HOST) = db2-server1<br /> HADR remote service name              (HADR_REMOTE_SVC) = 50010<br /> HADR instance name of remote server  (HADR_REMOTE_INST) = db2inst1<br /> HADR timeout value                       (HADR_TIMEOUT) = 120<br /> HADR target list                     (HADR_TARGET_LIST) = db2-server1:50010|db2-server2:50011<br /> HADR log write synchronization mode     (HADR_SYNCMODE) = SUPERASYNC<br /> HADR spool log data limit (4KB)      (HADR_SPOOL_LIMIT) = AUTOMATIC(52000)<br /> HADR log replay delay (seconds)     (HADR_REPLAY_DELAY) = 0<br /> HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0<br /> HADR SSL certificate label             (HADR_SSL_LABEL) =<br /> HADR SSL Hostname Validation        (HADR_SSL_HOST_VAL) = OFF</pre>The `HADR_LOCAL_HOST`, `HADR_LOCAL_SVC`, `HADR_REMOTE_HOST`, `HADR_REMOTE_SVC`, and `HADR_TARGET_LIST` parameters indicate the one-primary, two-standby HADR setup. |  | 
| Stop and start Db2 HADR. | `HADR_TARGET_LIST` is now set up on all three servers. Each Db2 server is aware of the other two. Stop and restart HADR (brief outage) to take advantage of the new configuration.On `db2-server1`, run the following commands:<pre>db2 stop hadr on db sample<br />db2 deactivate db sample<br />db2 activate db sample</pre>On `db2-server2`, run the following commands:<pre>db2 deactivate db sample<br />db2 start hadr on db sample as standby<br />SQL1766W  The command completed successfully</pre>On `db2-ec2`, run the following commands:<pre>db2 start hadr on db sample as standby<br />SQL1766W  The command completed successfully</pre>On `db2-server1`, run the following commands:<pre>db2 start hadr on db sample as primary<br />SQL1766W  The command completed successfully</pre>The HADR connection between Db2 on premises and on Amazon EC2 is now successfully established. The Db2 primary server `db2-server1` starts streaming transaction log records to both `db2-server2` and `db2-ec2` in real time.  | DBA | 

### Make Db2 on Amazon EC2 the primary during the cutover window
<a name="make-db2-on-amazon-ec2-as-primary-during-the-cutover-window"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Make sure that there is no HADR lag on the standby server. | Check HADR status from the primary server `db2-server1`. Don’t be alarmed when `HADR_STATE` is in `REMOTE_CATCHUP` status, which is normal when `HADR_SYNCMODE` is set to `SUPERASYNC`. The `PRIMARY_LOG_TIME` and `STANDBY_REPLAY_LOG_TIME` show that they are in sync:<pre>db2pd -hadr -db sample<br />                            HADR_ROLE = PRIMARY<br />                          REPLAY_TYPE = PHYSICAL<br />                        HADR_SYNCMODE = SUPERASYNC<br />                           STANDBY_ID = 2<br />                        LOG_STREAM_ID = 0<br />                           HADR_STATE = REMOTE_CATCHUP<br />.....<br />                     PRIMARY_LOG_TIME = 10/26/2022 02:11:32.000000 (1666750292)<br />                     STANDBY_LOG_TIME = 10/26/2022 02:11:32.000000 (1666750292)<br />              STANDBY_REPLAY_LOG_TIME = 10/26/2022 02:11:32.000000 (1666750292)</pre> | DBA | 
| Run HADR takeover. | To complete the migration, make `db2-ec2` the primary database by running the HADR takeover command. Use the command `db2pd` to verify the `HADR_ROLE` value:<pre>db2 TAKEOVER HADR ON DATABASE sample<br />DB20000I  The TAKEOVER HADR ON DATABASE command completed successfully.<br /><br />db2pd -hadr -db sample<br />Database Member 0 -- Database SAMPLE -- Active -- Up 0 days 00:03:25 -- Date 2022-10-26-02.46.45.048988<br /><br />                            HADR_ROLE = PRIMARY<br />                          REPLAY_TYPE = PHYSICAL</pre>To complete the migration to AWS, point the application connections to Db2 on Amazon EC2. |  | 

## Troubleshooting
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| If you use NAT for firewall and security reasons, the host can have two IP addresses (one internal and one external), which can cause an HADR IP address check failure. The `START HADR ON DATABASE` command returns the following message: `HADR_LOCAL_HOST:HADR_LOCAL_SVC (-xx-xx-xx-xx.:50011 (xx.xx.xx.xx:50011)) on remote database is different from HADR_REMOTE_HOST:HADR_REMOTE_SVC (xx-xx-xx-xx.:50011 (x.x.x.x:50011)) on local database.` | To [support HADR in a NAT environment](https://www.ibm.com/docs/en/db2/11.5?topic=support-hadr-nat), configure `HADR_LOCAL_HOST` with both the internal and external names. For example, if the Db2 server has the internal name `host1` and the external name `host1E`, set `HADR_LOCAL_HOST` to `"host1 \| host1E"`. | 
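
For example, assuming the internal name `host1` and the external name `host1E` from the solution column, the setting can be applied on that server as follows (it takes effect after HADR is restarted):

```shell
db2 update db cfg for sample using HADR_LOCAL_HOST "host1|host1E"
```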

## Related resources
<a name="migrate-db2-for-luw-to-amazon-ec2-with-high-availability-disaster-recovery-resources"></a>
+ [Db2 backup and restore operations between different operating systems and hardware platforms](https://www.ibm.com/docs/en/db2/11.5?topic=dbrs-backup-restore-operations-between-different-operating-systems-hardware-platforms)
+ [Set up Db2 STORAGE ACCESS ALIAS and DB2REMOTE](https://www.ibm.com/docs/en/db2/11.5?topic=commands-catalog-storage-access)
+ [Db2 high availability disaster recovery](https://www.ibm.com/docs/en/db2/11.5?topic=server-high-availability-disaster-recovery-hadr)
+ [hadr\_syncmode - HADR synchronization mode for log writes in peer state configuration parameter](https://www.ibm.com/docs/en/db2/11.5?topic=dcp-hadr-syncmode-hadr-synchronization-mode-log-writes-in-peer-state)

# Migrate IIS-hosted applications to Amazon EC2 by using appcmd.exe
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd"></a>

*Deepak Kumar, Amazon Web Services*

## Summary
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-summary"></a>

When you migrate Internet Information Services (IIS)-hosted applications to Amazon Elastic Compute Cloud (Amazon EC2) instances, you need to address several authentication challenges. These challenges include re-entering domain credentials for application pool identities and potentially regenerating machine keys for proper website functionality. You can use AWS Directory Service to establish trust relationships with your on-premises Active Directory or create a new managed Active Directory in AWS. This pattern describes a clean migration approach that uses the backup and restore functionality of IIS on Amazon EC2 instances. The approach uses appcmd.exe to uninstall and reinstall IIS on the target EC2 instances, enabling successful migration of IIS-hosted websites, application pool identities, and machine keys. 

## Prerequisites and limitations
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-prereqs"></a>

**Prerequisites**
+ An active AWS account for the target server.
+ A functional source IIS server with websites hosted on it.
+ An understanding of IIS working principles, such as administration and configuration.
+ System administrator access on both the source and target servers.
+ Completed migration of the source IIS server to the target AWS account. You can use migration tools such as AWS Application Migration Service, an Amazon Machine Image (AMI) snapshot-based approach, or other migration tools.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ IIS 8.5 or IIS 10.0

## Architecture
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-architecture"></a>

**Source technology stack**
+ Windows Server with IIS 8.5 or IIS 10.0 installed

**Target technology stack**
+ Windows Server with IIS 8.5 or IIS 10.0 installed
+ Application Migration Service

**Target architecture**

The following diagram shows the workflow and architecture components for this pattern.

![\[Workflow to migrate IIS-hosted applications to Amazon EC2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2f9f7757-b2bc-4077-b51a-700de521424c/images/36aa9b7a-d0aa-4fa4-be47-9fee43b53c22.png)


The solution includes the following steps:

1. [Install](https://docs.aws.amazon.com/mgn/latest/ug/agent-installation.html) and configure the AWS Replication Agent on the source IIS server in your corporate data center. This agent initiates the replication process and manages data transfer to AWS.

1. The AWS Replication Agent establishes a [secure connection ](https://docs.aws.amazon.com/mgn/latest/ug/Agent-Related-FAQ.html#How-Communication-Secured)to Application Migration Service and begins replicating the source server data, including IIS configurations, websites, and application files.

1. Application Migration Service launches EC2 instances in the application subnet with the replicated data. The target EC2 instance runs IIS and contains the migrated applications with their associated Amazon Elastic Block Store (Amazon EBS) volumes. After the initial replication, Application Migration Service continues to sync changes until you're [ready to cut over](https://docs.aws.amazon.com/mgn/latest/ug/migration-dashboard.html#ready-for-cutover1) to the new environment.
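
As a sketch of step 1, the agent is typically installed from an elevated command prompt on the source Windows server. The Region, credential parameters, and installer path below are placeholders; confirm the exact download URL and parameters in the Application Migration Service agent installation documentation:

```shell
:: Run from an elevated command prompt on the source server
:: (installer downloaded beforehand to C:\Temp)
C:\Temp\AwsReplicationWindowsInstaller.exe --region us-east-1 --aws-access-key-id <access-key> --aws-secret-access-key <secret-key>
```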

## Tools
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-tools"></a>

**AWS services**
+ [AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) helps you rehost (*lift and shift*) applications to the AWS Cloud without change and with minimal downtime.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/ebs/latest/userguide/what-is-ebs.html) provides block-level storage volumes for use with Amazon EC2 instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.

**Other tools**
+ [Internet Information Services (IIS)](https://www.iis.net/overview) for Windows Server is a web server with a scalable and open architecture for hosting anything on the Web. IIS provides a set of administration tools, including administration and command line tools (for example, appcmd.exe), managed code and scripting APIs, and Windows PowerShell support.

## Epics
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-epics"></a>

### Back up IIS at source prior to migration
<a name="back-up-iis-at-source-prior-to-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create backups of IIS-hosted websites, configuration key, and `WAS` key. | To create backups for IIS-hosted websites, the configuration key (`iisConfigurationKey`), and the `WAS` key (`iisWasKey`), use appcmd.exe on the source server. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.html) To export the configuration key and the `WAS` key, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.html) | IIS Administrator | 
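The backup step can be sketched with the following commands, run in an elevated command prompt on the source server. The backup name, export paths, and .NET Framework version in the path are illustrative assumptions; adjust them to your environment.

```shell
:: Back up the IIS configuration (sites, app pools, bindings).
:: "PreMigrationBackup" is an illustrative name.
%windir%\system32\inetsrv\appcmd.exe add backup "PreMigrationBackup"

:: Export the RSA key containers used to encrypt IIS configuration
:: sections, including the private keys (-pri). Paths are illustrative.
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -px "iisConfigurationKey" "C:\MigrationBackup\iisConfigurationKey.xml" -pri
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -px "iisWasKey" "C:\MigrationBackup\iisWasKey.xml" -pri
```

Copy the exported key files and the backup folder (created under `%windir%\system32\inetsrv\backup\`) to the target server before the restore step.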

### Uninstall and reinstall IIS on the target server
<a name="uninstall-and-reinstall-iis-on-the-target-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Uninstall IIS on the target server. | To uninstall IIS on the target server, use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.html) | IIS Administrator | 
| Install IIS on the target server. | To install IIS on the target server, use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.html) | IIS Administrator | 
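A minimal PowerShell sketch of the uninstall and reinstall, using the Server Manager cmdlets. The exact role services vary by workload, so match whatever is installed on the source server; `Web-Asp-Net45` below is an illustrative example.

```shell
# Elevated PowerShell on the target Windows Server instance.

# Remove the existing IIS installation.
Uninstall-WindowsFeature -Name Web-Server -Restart

# Reinstall IIS with the management tools, adding the same role
# services that the source server uses (example shown).
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools
```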

### Restore IIS websites and configuration from the backups
<a name="restore-iis-websites-and-configuration-from-the-backups"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Restore IIS websites and configuration. | To restore the IIS backups that you created from the source server on the target server, use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.html) | IIS Administrator | 
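The restore mirrors the backup commands; run the following in an elevated command prompt on the target server after copying over the exported key files and the backup folder. Names and paths are the same illustrative assumptions used in the backup sketch.

```shell
:: Import the machine keys before restoring the configuration so that
:: encrypted configuration sections can be decrypted on the new server.
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pi "iisConfigurationKey" "C:\MigrationBackup\iisConfigurationKey.xml"
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pi "iisWasKey" "C:\MigrationBackup\iisWasKey.xml"

:: Restore the IIS configuration backup created on the source server.
%windir%\system32\inetsrv\appcmd.exe restore backup "PreMigrationBackup"
```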

## Related resources
<a name="migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd-resources"></a>

**AWS documentation**
+ [Installing the AWS Replication Agent](https://docs.aws.amazon.com/mgn/latest/ug/agent-installation.html) (AWS Application Migration Service documentation)

**AWS Prescriptive Guidance**
+ [Migrate an on-premises VM to Amazon EC2 by using AWS Application Migration Service](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.html)
+ [Using AMIs or Amazon EBS snapshots for backups](https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/ec2-backup.html#amis-snapshots)

**Microsoft resources**
+ [Application pool identities](https://learn.microsoft.com/en-us/troubleshoot/developer/webapps/iis/was-service-svchost-process-operation/understanding-identities#application-pool-identities)
+ [IIS documentation](https://learn.microsoft.com/en-us/iis/)
+ [IIS 8 appcmd.exe documentation](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj635852(v=ws.11))
+ [IIS 10 appcmd.exe documentation](https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-10/new-features-introduced-in-iis-10)
+ [Powerful Admin Tools](https://learn.microsoft.com/en-us/iis/overview/powerful-admin-tools)

# Migrate an on-premises Microsoft SQL Server database to Amazon EC2 using Application Migration Service
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn"></a>

*Senthil Ramasamy, Amazon Web Services*

## Summary
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-summary"></a>

This pattern describes the steps for migrating a Microsoft SQL Server database from an on-premises data center to an Amazon Elastic Compute Cloud (Amazon EC2) instance. It uses the AWS Application Migration Service (AWS MGN) to rehost your database using an automated lift-and-shift migration. AWS MGN performs block-level replication of your source database server.

## Prerequisites and limitations
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Microsoft SQL Server database in an on-premises data center

**Limitations**
+ Your network bandwidth may be limited between the on-premises data center and AWS.
+ AWS MGN is limited to databases that are hosted on standalone servers with dedicated storage. It doesn’t support migrating clustered database systems or database systems where the rate of change exceeds the available network throughput.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ All versions of Microsoft SQL Server database
+ Windows and Linux operating systems that [support AWS MGN](https://docs.aws.amazon.com/mgn/latest/ug/Supported-Operating-Systems.html)

## Architecture
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-architecture"></a>

**Source technology stack**

An on-premises Microsoft SQL Server database

**Target technology stack**

A Microsoft SQL Server database on an Amazon EC2 instance

**Target architecture**

![\[Replicate data from an on-premises corporate data center to AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a459eaef-c256-4691-a7ec-2304f634228c/images/d8d6cee7-f42c-4686-bf92-6e6d39adfb17.png)


This architecture uses AWS MGN to replicate data from an on-premises corporate data center to AWS. The diagram shows the data replication process, API communications, and the test and cutover phases.

1. Data replication:
   + AWS MGN replicates data from the on-premises corporate data center to AWS and initiates ongoing replication of changes.
   + Replication servers in the staging subnet receive and process the data.

1. API communication:
   + Replication servers connect to AWS MGN, Amazon EC2, and Amazon Simple Storage Service (Amazon S3) API endpoints through TCP port 443.
   + AWS MGN manages the migration.
   + Amazon EC2 manages instance operations.

1. Test and cutover:
   + Test instances launch in the operational subnet using replicated data.
   + After successful testing, AWS MGN creates cutover instances for the final migration.
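The test and cutover phases can also be driven from the AWS CLI. The following sketch assumes configured AWS credentials; the source server ID is a placeholder that you replace with the ID returned by the first command.

```shell
# List replicating source servers and their lifecycle state.
aws mgn describe-source-servers \
    --query "items[].{id:sourceServerID,state:lifeCycle.state}"

# Launch a test instance from the replicated data
# (s-1234567890abcdef0 is a placeholder ID).
aws mgn start-test --source-server-ids s-1234567890abcdef0

# After validation succeeds, launch the cutover instance.
aws mgn start-cutover --source-server-ids s-1234567890abcdef0
```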

## Tools
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-tools"></a>
+ [AWS Application Migration Service (AWS MGN)](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) helps you rehost (*lift and shift*) applications to the AWS Cloud without change and with minimal downtime.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

## Best practices
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-best-practices"></a>
+ Set up API regional endpoints for AWS MGN, Amazon EC2, and Amazon S3 in the virtual private cloud (VPC) to prohibit public access from the internet.
+ Set up AWS MGN launch settings to launch target database servers in a private subnet.
+ Allow only required ports in database security groups.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
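As an illustration of allowing only required ports, a security group rule that admits SQL Server traffic (default port 1433) from just the application subnet might look like the following. The group ID and CIDR block are placeholders.

```shell
# Placeholder security group ID and subnet CIDR; substitute your own.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 1433 \
    --cidr 10.0.1.0/24
```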

## Epics
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-epics"></a>

### Set up AWS MGN
<a name="set-up-aws-mgn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS MGN. | Search for the AWS Application Migration Service in the AWS Management Console, and initiate the setup process. This will create a replication template and redirect you to the MGN console **Source servers** page. As you configure the MGN service, choose a service role from the generated list. | DBA, Migration engineer | 
| Add source server. | Add details of your on-premises source database server, and then add the server. | DBA, Migration engineer | 
| Install the AWS MGN agent on the source server. | Download the AWS MGN agent installer to your local system, and transfer the installer to your source database server. To validate the installer hash, see [Validating the downloaded AWS Replication Agent installer for Windows 2012](https://docs.aws.amazon.com/mgn/latest/ug/windows-agent.html#installer-hash-table-2012). | DBA, Migration engineer | 

### Install AWS MGN agent on source machines
<a name="install-aws-mgn-agent-on-source-machines"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate client IAM credentials. | Before you install the AWS MGN agent, generate AWS credentials by creating a new IAM user with the appropriate permissions. For more information, see [AWS managed policies for AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/security-iam-awsmanpol.html) and [Generating the required AWS credentials](https://docs.aws.amazon.com/mgn/latest/ug/credentials.html). | DBA, Migration engineer | 
| Install the agent on the source server. | Install the agent on the source machine that hosts the Microsoft SQL Server database. For more information, see [Installing the AWS Replication Agent on Windows servers](https://docs.aws.amazon.com/mgn/latest/ug/windows-agent.html). Provide the following AWS credentials: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.html) Your unique AWS credentials enable the AWS MGN agent to authenticate and perform migration tasks. | App owner, DBA, Migration engineer | 
| Choose disks to replicate. | After you enter your AWS credentials, the installer verifies that your server meets the minimum requirements for agent installation (for example, whether the server has enough disk space to install the AWS MGN agent). The installer displays the volume labels and storage details. To replicate your database by using AWS MGN, select the applicable disks on your source server. Enter the path of each disk, separated by commas. If you want to replicate all of the disks, leave the path blank. After you confirm the selected disks, the installation proceeds. | DBA, Migration engineer | 
| Monitor synchronization progress. | AWS Replication Agent initiates the synchronization process by first taking a snapshot of the selected disks and then replicating the data.You can monitor the synchronization progress from the **Source server** page in the AWS MGN console. For more information, see [Monitor the server in the migration lifecycle](https://docs.aws.amazon.com/mgn/latest/ug/migration-dashboard.html). | DBA, Migration engineer | 

### Replication using AWS MGN
<a name="replication-using-aws-mgn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Manage replication progress. | After you start the initial synchronization, your source server appears in the AWS MGN console, where you can manage and monitor the migration. The console displays an estimated time for complete replication, which is based on the total size of selected disks and available network bandwidth. | DBA, Migration engineer | 
| Verify the synchronization. | After the disks on the source server are fully synchronized, verify that all selected disks are listed as fully synced and no errors are reported in the console. The AWS MGN console will then automatically transition the migration lifecycle status to **Ready for testing**, indicating that the replicated environment in AWS is prepared for performance and functionality testing. | App owner, DBA, Migration engineer | 
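The console's replication-time estimate is driven by disk size and available bandwidth. A rough back-of-envelope version of that calculation is shown below; the figures are illustrative, and this is not MGN's internal formula (which also accounts for compression and the rate of change on the source disks).

```shell
size_gb=500            # total size of the replicated disks
bandwidth_mbps=250     # sustained bandwidth available for replication

# 1 GB is 8,000 megabits in the decimal units that links are rated in.
transfer_seconds=$(( size_gb * 8000 / bandwidth_mbps ))
echo "Estimated initial sync: $(( transfer_seconds / 3600 ))h $(( transfer_seconds % 3600 / 60 ))m"
```

With these figures, the estimate works out to roughly four and a half hours for the initial sync.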

### Test and cut over
<a name="test-and-cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure launch settings. | Choose the source server in the AWS MGN console, and update the launch settings for the target test instance. From the source **Server details** page, navigate to the **Launch settings** tab to configure the test instance. Choose a cost-effective instance type and Amazon Elastic Block Store (Amazon EBS) volume type, and then configure the security groups and network requirements. For more information, see [Launch settings](https://docs.aws.amazon.com/mgn/latest/ug/launch-settings.html). | DBA, Migration engineer | 
| Launch the target test instance. | Navigate to the AWS MGN console of your synchronized source machine, and launch a target test instance by choosing **Test and cut over** and then **Launch test instances**. This creates a launch job that deploys the test instance using your configured settings. The instance launches in the AWS Cloud and replicates your source database server's environment. Monitor the launch progress from the **Launch history** page, where you can track the instance creation and address any issues. | DBA, Migration engineer | 
| Validate the target test instance. | Validate the Amazon EC2 database server: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.html) Conduct validation tests to ensure that the database functions as expected. | DBA, Migration engineer | 
| Rename the server. | AWS MGN migration involves a storage-level copy of your on-premises source server. Your SQL Server EC2 instance contains only the original source server's details in its binaries, so update the binary information to reflect the new server's name. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.html) | DBA, Migration engineer | 
| Launch the cutover instance. | In the AWS MGN console, on the **Source servers** page, confirm that the migration lifecycle status of the server is **Ready for cutover**. Configure the launch settings for the cutover instance, ensuring that the settings mirror your on-premises environment. Before initiating the cutover, shut down your on-premises database, which ensures the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.html) Initiate the cutover instance in the AWS MGN console. When the cutover instance is operational, log in to the instance and perform the following tests: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn.html) | App owner, DBA, Migration engineer, Migration lead | 
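The rename step updates the server name stored in SQL Server's system catalog so that `@@SERVERNAME` matches the new Windows hostname. A sketch using sqlcmd on the cutover instance follows; `OLD-SOURCE-NAME` and `NEW-EC2-NAME` are placeholders, and the default instance service name `MSSQLSERVER` is assumed.

```shell
:: Drop the old server name and register the new one
:: (placeholder names shown).
sqlcmd -Q "EXEC sp_dropserver 'OLD-SOURCE-NAME'; EXEC sp_addserver 'NEW-EC2-NAME', 'local';"

:: Restart the SQL Server service so the change takes effect.
net stop MSSQLSERVER && net start MSSQLSERVER

:: Verify that the catalog now reports the new name.
sqlcmd -Q "SELECT @@SERVERNAME;"
```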

## Troubleshooting
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The initial synchronization fails at the authentication step. | This is typically a network connectivity issue: the replication server can’t connect to the AWS MGN endpoint. Verify that the staging subnet allows outbound TCP port 443 to the AWS MGN, Amazon EC2, and Amazon S3 endpoints. | 

## Related resources
<a name="migrate-microsoft-sql-server-to-amazon-ec2-using-aws-mgn-resources"></a>

**AWS documentation**
+ [Getting started with AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/getting-started.html)
+ [Migrate an on-premises Microsoft SQL Server database to Amazon EC2](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2.html)
+ [What is Microsoft SQL Server on Amazon EC2?](https://docs.aws.amazon.com/sql-server-ec2/latest/userguide/sql-server-on-ec2-overview.html)

**Videos**
+ [Performing a Lift and Shift Migration with AWS Application Migration Service](https://www.youtube.com/watch?v=tB0sAR3aCb4) (video)

# Migrate an F5 BIG-IP workload to F5 BIG-IP VE on the AWS Cloud
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud"></a>

*Deepak Kumar, Amazon Web Services*

## Summary
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-summary"></a>

Organizations are looking to migrate to the AWS Cloud to increase their agility and resilience. After you migrate your [F5 BIG-IP ](https://www.f5.com/products/big-ip-services)security and traffic management solutions to the AWS Cloud, you can focus on agility and adoption of high-value operational models across your enterprise architecture.

This pattern describes how to migrate an F5 BIG-IP workload to an [F5 BIG-IP Virtual Edition (VE)](https://www.f5.com/products/big-ip-services/virtual-editions) workload on the AWS Cloud. The workload will be migrated by rehosting the existing environment and deploying aspects of replatforming, such as service discovery and API integrations. [AWS CloudFormation templates](https://github.com/F5Networks/f5-aws-cloudformation) accelerate your workload’s migration to the AWS Cloud.

This pattern is intended for technical engineering and architectural teams that are migrating F5 security and traffic management solutions, and accompanies the guide [Migrating from F5 BIG-IP to F5 BIG-IP VE on the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-f5-big-ip/welcome.html) on the AWS Prescriptive Guidance website.

## Prerequisites and limitations
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-prereqs"></a>

**Prerequisites**
+ An existing on-premises F5 BIG-IP workload.
+ Existing F5 licenses for BIG-IP VE versions.
+ An active AWS account.
+ An existing virtual private cloud (VPC) configured for egress through a NAT gateway or an Elastic IP address, and configured with access to the following endpoints: Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), AWS Security Token Service (AWS STS), and Amazon CloudWatch. You can also modify the [Modular and scalable VPC architecture](https://aws.amazon.com/quickstart/architecture/vpc/) Quick Start as a building block for your deployments.
+ One or two existing Availability Zones, depending on your requirements. 
+ Three existing private subnets in each Availability Zone.
+ AWS CloudFormation templates, [available in the F5 GitHub repository](https://github.com/F5Networks/f5-aws-cloudformation/blob/master/template-index.md). 

During the migration, you might also use the following, depending on your requirements:
+ An [F5 Cloud Failover Extension](https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/) to manage Elastic IP address mapping, secondary IP mapping, and route table changes. 
+ If you use multiple Availability Zones, you will need to use the F5 Cloud Failover Extension to handle the Elastic IP mapping to virtual servers.
+ You should consider using [F5 Application Services 3 (AS3)](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/), [F5 Application Services Templates (FAST)](https://clouddocs.f5.com/products/extensions/f5-appsvcs-templates/latest/), or another infrastructure as code (IaC) model to manage the configurations. Preparing the configurations in an IaC model and using code repositories will help with the migration and your ongoing management efforts.
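As an illustration of the IaC approach, an AS3 declaration kept in a code repository can be pushed to a BIG-IP instance through its REST management endpoint. The host, credentials, and declaration file name below are placeholders.

```shell
# Push a version-controlled AS3 declaration to the BIG-IP
# management API. BIG-IP-HOST, admin:PASSWORD, and
# declaration.json are placeholders.
curl -sku admin:PASSWORD \
    -H "Content-Type: application/json" \
    -X POST https://BIG-IP-HOST/mgmt/shared/appsvcs/declare \
    -d @declaration.json
```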

**Expertise**
+ This pattern requires familiarity with how one or more VPCs can be connected to existing data centers. For more information about this, see [Network-to-Amazon VPC connectivity options](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/network-to-amazon-vpc-connectivity-options.html) in the Amazon VPC documentation. 
+ Familiarity is also required with F5 products and modules, including [Traffic Management Operating System (TMOS)](https://www.f5.com/services/resources/white-papers/tmos-redefining-the-solution), [Local Traffic Manager (LTM)](https://www.f5.com/products/big-ip-services/local-traffic-manager), [Global Traffic Manager (GTM)](https://techdocs.f5.com/kb/en-us/products/big-ip_gtm/manuals/product/gtm-concepts-11-5-0/1.html#unique_9842886), [Access Policy Manager (APM)](https://www.f5.com/products/security/access-policy-manager), [Application Security Manager (ASM)](https://www.f5.com/pdf/products/big-ip-application-security-manager-overview.pdf), [Advanced Firewall Manager (AFM)](https://www.f5.com/products/security/advanced-firewall-manager), and [BIG-IQ](https://www.f5.com/products/automation-and-orchestration/big-iq).

**Product versions**
+ We recommend that you use F5 BIG-IP [version 13.1](https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/product/relnote-bigip-ve-13-1-0.html) or later, although the pattern supports F5 BIG-IP [version 12.1](https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/product/relnote-bigip-12-1-4.html) or later.

## Architecture
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-architecture"></a>

**Source technology stack**
+ F5 BIG-IP workload

**Target technology stack**
+ Amazon CloudFront
+ CloudWatch
+ Amazon EC2
+ Amazon S3
+ Amazon VPC
+ AWS Global Accelerator
+ AWS STS
+ AWS Transit Gateway
+ F5 BIG-IP VE

**Target architecture**

![\[Architecture to migrate an F5 BIG-IP workload to an F5 BIG-IP VE workload.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/586fe806-fac1-48d3-9eb1-45a6c86430dc/images/16d7fc09-1ffe-4721-b503-d971db84cbae.png)


## Tools
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-tools"></a>
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) speeds up distribution of your web content by delivering it through a worldwide network of data centers, which lowers latency and improves performance.   
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Security Token Service (AWS STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) helps you request temporary, limited-privilege credentials for users.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects virtual private clouds (VPCs) and on-premises networks.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-epics"></a>

### Discovery and assessment
<a name="discovery-and-assessment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Assess the performance of F5 BIG-IP. | Collect and record the performance metrics of the applications on the virtual server, and metrics of systems that will be migrated. This will help to correctly size the target AWS infrastructure for better cost optimization. | F5 Architect, Engineer and Network Architect, Engineer | 
| Evaluate the F5 BIG-IP operating system and configuration. | Evaluate which objects will be migrated and if a network structure needs to be maintained, such as VLANs. | F5 Architect, Engineer | 
| Evaluate F5 license options. | Evaluate which license and consumption model you will require. This assessment should be based on your evaluation of the F5 BIG-IP operating system and configuration. | F5 Architect, Engineer | 
| Evaluate the public applications. | Determine which applications will require public IP addresses. Align those applications to the required instances and clusters to meet performance and service-level agreement (SLA) requirements. | F5 Architect, Cloud Architect, Network Architect, Engineer, App Teams | 
| Evaluate internal applications. | Evaluate which applications will be used by internal users. Make sure you know where those internal users sit in the organization and how those environments connect to the AWS Cloud. You should also make sure those applications can use domain name system (DNS) as part of the default domain. | F5 Architect, Cloud Architect, Network Architect, Engineer, App Teams | 
| Finalize the AMI. | Not all F5 BIG-IP versions are created as Amazon Machine Images (AMIs). You can use the F5 BIG-IP Image Generator Tool if you have specific required quick-fix engineering (QFE) versions. For more information about this tool, see the "Related resources" section. | F5 Architect, Cloud Architect, Engineer | 
| Finalize the instance types and architecture. | Decide on the instance types, VPC architecture, and interconnected architecture. | F5 Architect, Cloud Architect, Network Architect, Engineer | 

### Complete security and compliance-related activities
<a name="complete-security-and-compliance-related-activities"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Document the existing F5 security policies. | Collect and document existing F5 security policies. Make sure you create a copy of them in a secure code repository. | F5 Architect, Engineer | 
| Encrypt the AMI. | (Optional) Your organization might require encryption of data at rest. For more information about creating a custom Bring Your Own License (BYOL) image, see the "Related resources" section. | F5 Architect, Engineer, Cloud Architect, Engineer | 
| Harden the devices. | Harden the BIG-IP VE instances according to F5 guidance and your organization's security baselines. This will help protect against potential vulnerabilities. | F5 Architect, Engineer | 

### Configure your new AWS environment
<a name="configure-your-new-aws-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create edge and security accounts. | Sign in to the AWS Management Console and create the AWS accounts that will provide and operate the edge and security services. These accounts might be different from the accounts that operate VPCs for shared services and applications. This step can be completed as part of a landing zone. | Cloud Architect, Engineer | 
| Deploy edge and security VPCs. | Set up and configure the VPCs required to deliver edge and security services. | Cloud Architect, Engineer | 
| Connect to the source data center. | Connect to the source data center that hosts your F5 BIG-IP workload. | Cloud Architect, Network Architect, Engineer | 
| Deploy the VPC connections. | Connect the edge and security service VPCs to the application VPCs. | Network Architect, Engineer | 
| Deploy the instances. | Deploy the instances by using the CloudFormation templates from the "Related resources" section. | F5 Architect, Engineer | 
| Test and configure instance failover. | Make sure that the AWS Advanced HA iAPP template or F5 Cloud Failover Extension is configured and operating correctly. | F5 Architect, Engineer | 

### Configure networking
<a name="configure-networking"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the VPC topology. | Open the Amazon VPC console and make sure that your VPC has all the required subnets and protections for the F5 BIG-IP VE deployment. | Network Architect, F5 Architect, Cloud Architect, Engineer | 
| Prepare your VPC endpoints. | Prepare the VPC endpoints for Amazon EC2, Amazon S3, and AWS STS if an F5 BIG-IP workload does not have access to a NAT Gateway or Elastic IP address on a TMM interface. | Cloud Architect, Engineer | 
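An interface endpoint for one of those services could be created as in the following sketch. The VPC, subnet, and security group IDs are placeholders, the Region in the service name is illustrative; repeat for AWS STS, and consider a gateway endpoint for Amazon S3.

```shell
# Placeholder VPC, subnet, and security group IDs;
# us-east-1 is an illustrative Region.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ec2 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
```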

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the configuration. | Migrate the F5 BIG-IP configuration to F5 BIG-IP VE on the AWS Cloud. | F5 Architect, Engineer | 
| Associate the secondary IPs. | Virtual server IP addresses have a relationship with the secondary IP addresses assigned to the instances. Assign secondary IP addresses and make sure "Allow remap/reassignment" is selected. | F5 Architect, Engineer | 
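Assigning a secondary IP address with reassignment allowed can be sketched with the AWS CLI; the ENI ID and address are placeholders. The `--allow-reassignment` flag is what lets the failover extension remap the address between instances.

```shell
# Placeholder ENI ID and address; substitute your own values.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-addresses 10.0.2.15 \
    --allow-reassignment
```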

### Test configurations
<a name="test-configurations"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the virtual server configurations. | Test the virtual servers. | F5 Architect, App Teams | 

### Finalize operations
<a name="finalize-operations"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the backup strategy. | Systems must be shut down to create a full snapshot. For more information, see "Updating an F5 BIG-IP virtual machine" in the "Related resources" section. | F5 Architect, Cloud Architect, Engineer | 
| Create the cluster failover runbook. | Make sure that the failover runbook process is complete. | F5 Architect, Engineer | 
| Set up and validate logging. | Configure F5 Telemetry Streaming to send logs to the required destinations. | F5 Architect, Engineer | 

### Complete the cutover
<a name="complete-the-cutover"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Cut over to the new deployment. |  | F5 Architect, Cloud Architect, Network Architect, Engineer, App Teams | 

## Related resources
<a name="migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud-resources"></a>

**Migration guide**
+ [Migrating from F5 BIG-IP to F5 BIG-IP VE on the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-f5-big-ip/welcome.html)

**F5 resources**
+ [CloudFormation templates in the F5 GitHub repository](https://github.com/F5Networks/f5-aws-cloudformation)
+ [F5 in AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=74d946f0-fa54-4d9f-99e8-ff3bd8eb2745)
+ [F5 BIG-IP VE overview](https://www.f5.com/products/big-ip-services/virtual-editions) 
+ [Example Quickstart - BIG-IP Virtual Edition with WAF (LTM + ASM)](https://github.com/F5Networks/f5-aws-cloudformation-v2/tree/main/examples/quickstart)
+ [F5 Application services on AWS: an overview (video)](https://www.youtube.com/watch?v=kutVjRHOAXo)
+ [F5 Application Services 3 Extension User Guide](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/)
+ [F5 cloud documentation](https://clouddocs.f5.com/training/community/public-cloud/html/intro.html)
+ [F5 iControl REST wiki](https://clouddocs.f5.com/api/icontrol-rest/)
+ [F5 Overview of single configuration files (11.x - 15.x)](https://support.f5.com/csp/article/K13408)
+ [F5 whitepapers](https://www.f5.com/services/resources/white-papers)
+ [F5 BIG-IP Image Generator Tool](https://clouddocs.f5.com/cloud/public/v1/ve-image-gen_index.html)
+ [Updating an F5 BIG-IP VE virtual machine](https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-vmware-esxi-11-5-0/3.html)
+ [Overview of the UCS archive "platform-migrate" option](https://support.f5.com/csp/article/K82540512)

# Migrate an on-premises Go web application to AWS Elastic Beanstalk by using the binary method
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method"></a>

*Suhas Basavaraj and Shumaz Mukhtar Kazi, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-summary"></a>

This pattern describes how to migrate an on-premises Go web application to AWS Elastic Beanstalk. After the application is migrated, Elastic Beanstalk builds the binary for the source bundle and deploys it to an Amazon Elastic Compute Cloud (Amazon EC2) instance.

As a rehost migration strategy, this pattern’s approach is fast and requires no code changes, which means less testing and migration time. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An on-premises Go web application.
+ A GitHub repository that contains your Go application’s source code. If you do not use GitHub, there are other ways to [create an application source bundle for Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html).

**Product versions**
+ The most recent Go version supported by Elastic Beanstalk. For more information, see the [Elastic Beanstalk documentation](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.go).

## Architecture
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-architecture"></a>

**Source technology stack**
+ An on-premises Go web application 

**Target technology stack**
+ AWS Elastic Beanstalk
+ Amazon CloudWatch

**Target architecture**

![\[Architecture for migrating a Go application to Elastic Beanstalk\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/cd8d660d-5621-4ea7-8f97-7a1e321c57d3/images/1df543d9-7073-43d8-abd3-f1f7e57278eb.png)


## Tools
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-tools"></a>
+ [AWS Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.html) quickly deploys and manages applications in the AWS Cloud without users having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control.
+ [GitHub](https://github.com/) is a web-based service for hosting and collaborating on repositories that use the Git distributed version control system.

## Epics
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-epics"></a>

### Create the Go web application source bundle .zip file
<a name="create-the-go-web-application-source-bundle-zip-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the source bundle for the Go application.  | Open the GitHub repository that contains your Go application’s source code and prepare the source bundle. The source bundle contains an `application.go` source file in the root directory, which hosts the main package for your Go application. If you do not use GitHub, see the *Prerequisites* section earlier in this pattern for other ways to create your application source bundle. | System Admin, Application Developer | 
| Create a configuration file. | Create an `.ebextensions` folder in your source bundle, and then create an `options.config` file inside this folder. For more information, see the [Elastic Beanstalk documentation](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html). | System Admin, Application Developer | 
|  Create the source bundle .zip file. | Run the following command.<pre>git archive -o ../godemoapp.zip HEAD</pre>This creates the source bundle .zip file. Download and save the .zip file as a local file. The .zip file cannot exceed 512 MB and cannot include a parent folder or top-level directory. | System Admin, Application Developer | 
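The tasks above can be sketched end to end. This example builds a throwaway repository so the archive step is reproducible; in practice you run only the `git archive` command inside your existing clone. The `APP_NAME` option is a hypothetical environment variable, and `godemoapp` is the example name from the pattern's archive command.

```shell
# Sketch: build a source bundle from a scratch repository (godemoapp is the
# pattern's example name; the repo contents here are placeholders).
mkdir -p godemoapp && cd godemoapp
git init -q .

# application.go must sit at the bundle root and host the main package.
printf 'package main\n' > application.go

# .ebextensions/options.config holds Elastic Beanstalk option settings;
# APP_NAME is a hypothetical environment variable for illustration.
mkdir -p .ebextensions
cat > .ebextensions/options.config <<'EOF'
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_NAME: godemoapp
EOF

git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"

# The pattern's archive command: bundles HEAD only, with no parent folder.
git archive -o ../godemoapp.zip HEAD
```

Because `git archive` packages the tree at `HEAD`, files at the repository root land at the top level of the .zip file, which is what Elastic Beanstalk requires.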

### Migrate the Go web application to Elastic Beanstalk
<a name="migrate-the-go-web-application-to-elastic-beanstalk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Choose the Elastic Beanstalk application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method.html)For instructions on how to create an Elastic Beanstalk application, see the [Elastic Beanstalk documentation](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.CreateApp.html). | System Admin, Application Developer | 
| Initiate the Elastic Beanstalk web server environment.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method.html) | System Admin, Application Developer | 
| Upload the source bundle .zip file to Elastic Beanstalk. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method.html) | System Admin, Application Developer | 
| Test the deployed Go web application. | You will be redirected to the Elastic Beanstalk application's overview page. At the top of the overview, next to **Environment ID**, choose the URL that ends in `elasticbeanstalk.com` to navigate to your application. Your application must use this name in its configuration file as an environment variable and display it on the web page. | System Admin, Application Developer | 

## Troubleshooting
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to access the application through an Application Load Balancer. | Check the target group that contains your Elastic Beanstalk application. If it’s unhealthy, log in to your Elastic Beanstalk instance and check the `nginx.conf` file configuration to verify that it routes to the correct health status URL. You might need to change the target group health check URL. | 

## Related resources
<a name="migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method-resources"></a>
+ [Go platform versions supported by Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.go)
+ [Using configuration files with Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html)
+ [Creating an example application in Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.CreateApp.html) 

# Migrate an on-premises SFTP server to AWS using AWS Transfer for SFTP
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp"></a>

*Akash Kumar, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-summary"></a>

This pattern describes how to migrate an on-premises file transfer solution that uses the Secure Shell (SSH) File Transfer Protocol (SFTP) to the AWS Cloud by using the AWS Transfer for SFTP service. Users generally connect to an SFTP server either through its domain name or by fixed IP. This pattern covers both cases.

AWS Transfer for SFTP is a member of the AWS Transfer Family. It is a secure transfer service that you can use to transfer files into and out of AWS storage services over SFTP. You can use AWS Transfer for SFTP with Amazon Simple Storage Service (Amazon S3) or Amazon Elastic File System (Amazon EFS). This pattern uses Amazon S3 for storage.

## Prerequisites and limitations
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing SFTP domain name or fixed SFTP IP.

**Limitations**
+ The largest object that you can transfer in one request is currently 5 GiB. For files that are larger than 100 MiB, consider using [Amazon S3 multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html).

## Architecture
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-architecture"></a>

**Source technology stack**
+ On-premises flat files or database dump files.

**Target technology stack**
+ AWS Transfer for SFTP
+ Amazon S3
+ Amazon Virtual Private Cloud (Amazon VPC)
+ AWS Identity and Access Management (IAM) roles and policies
+ Elastic IP addresses
+ Security groups
+ Amazon CloudWatch Logs (optional)

**Target architecture**

![\[Use AWS Transfer for SFTP to migrate an on-premises SFTP server to the AWS Cloud.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ec0a905c-edef-48ba-9b5e-ea4a4040d320/images/f42aa711-bfe0-4ac6-9f66-5c18a1dd1c7a.png)


**Automation and scale**

To automate the target architecture for this pattern, use the attached CloudFormation templates:
+ `amazon-vpc-subnets.yml` provisions a virtual private cloud (VPC) with two public and two private subnets.
+ `amazon-sftp-server.yml` provisions the SFTP server.
+ `amazon-sftp-customer.yml` adds users.

## Tools
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. This pattern uses Amazon S3 as the storage system for file transfers.
+ [AWS Transfer for SFTP](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html) helps you transfer files into and out of AWS storage services over the SFTP protocol.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-epics"></a>

### Create a VPC
<a name="create-a-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC with subnets. | Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/). Create a virtual private cloud (VPC) with two public subnets. (The second subnet provides high availability.)—or—You can deploy the attached CloudFormation template, `amazon-vpc-subnets.yml`, in the [CloudFormation console](https://console.aws.amazon.com/cloudformation) to automate the tasks in this epic. | Developer, Systems administrator | 
| Add an internet gateway. | Provision an internet gateway and attach it to the VPC. | Developer, Systems administrator | 
| Migrate an existing IP. | Attach an existing IP to the Elastic IP address. You can create an Elastic IP address from your address pool and use it. | Developer, Systems administrator | 

### Provision an SFTP server
<a name="provision-an-sftp-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SFTP server. | Open the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/). Follow the instructions in [Create an internet-facing endpoint for your server](https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#create-internet-facing-endpoint) in the AWS Transfer Family documentation to create an SFTP server with an internet-facing endpoint. For **Endpoint type**, choose **VPC hosted**. For **Access**, choose **Internet Facing**. For **VPC**, choose the VPC you created in the previous epic.—or—You can deploy the attached CloudFormation template, `amazon-sftp-server.yml`, in the [CloudFormation console](https://console.aws.amazon.com/cloudformation) to automate the tasks in this epic. | Developer, Systems administrator | 
| Migrate the domain name. | Attach the existing domain name to the custom hostname. If you're using a new domain name, use the **Amazon Route 53 DNS** alias. For an existing domain name, choose **Other DNS**. For more information, see [Working with custom hostnames](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html) in the AWS Transfer Family documentation. | Developer, Systems administrator | 
| Add a CloudWatch logging role. | (Optional) If you want to enable CloudWatch logging, create a `Transfer` role with the CloudWatch Logs API operations `logs:CreateLogGroup`, `logs:CreateLogStream`, `logs:DescribeLogStreams`, and `logs:PutLogEvents`. For more information, see [Log activity with CloudWatch](https://docs.aws.amazon.com/transfer/latest/userguide/monitoring.html#monitoring-enabling) in the AWS Transfer Family documentation. | Developer, Systems administrator | 
| Save and submit. | Choose **Save**. For **Actions**, choose **Start**, and wait for the SFTP server to be created with the status **Online**. | Developer, Systems administrator | 
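The permissions policy for the optional logging role can be sketched as follows. This is a minimal example, not the pattern's attached template: the role must also have a trust relationship that allows `transfer.amazonaws.com` to assume it, and you can scope `Resource` to specific log groups instead of `*`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransferLogging",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```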

### Map Elastic IP addresses to the SFTP server
<a name="map-elastic-ip-addresses-to-the-sftp-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Stop the server so you can modify settings. | On the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/), choose **Servers**, and then select the SFTP server you created. For **Actions**, choose **Stop**. When the server is offline, choose **Edit** to modify its settings. | Developer, Systems administrator | 
| Choose Availability Zones and subnets. | In the **Availability Zones** section, choose the Availability Zones and subnets for your VPC. | Developer, Systems administrator | 
| Add Elastic IP addresses. | For **IPv4 Addresses**, choose an Elastic IP address for each subnet, and then choose **Save**. | Developer, Systems administrator | 

### Add users
<a name="add-users"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role for users to access the S3 bucket. | Create an IAM role for `Transfer` and add `s3:ListBucket`, `s3:GetBucketLocation`, and `s3:PutObject` with the S3 bucket name as a resource. For more information, see [Create an IAM role and policy](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-roles.html) in the AWS Transfer Family documentation.—or—You can deploy the attached CloudFormation template, `amazon-sftp-customer.yml`, in the [CloudFormation console](https://console.aws.amazon.com/cloudformation) to automate the tasks in this epic. | Developer, Systems administrator | 
| Create an S3 bucket. | Create an S3 bucket for the application. | Developer, Systems administrator | 
| Create optional folders. | (Optional) If you want to store files for users separately, in specific Amazon S3 folders, add folders as appropriate. | Developer, Systems administrator | 
| Create an SSH public key. | To create an SSH key pair, see [Generate SSH keys](https://docs.aws.amazon.com/transfer/latest/userguide/key-management.html#sshkeygen) in the AWS Transfer Family documentation. | Developer, Systems administrator | 
| Add users. | On the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/), choose **Servers**, select the SFTP server you created, and then choose **Add user**. For **Home directory**, choose the S3 bucket you created. For **SSH public key**, specify the public key portion of the SSH key pair. Add users for the SFTP server, and then choose **Add**. | Developer, Systems administrator | 
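The user access policy from the first task in this epic can be sketched like this. The bucket name is a placeholder; note that the bucket-level actions (`s3:ListBucket`, `s3:GetBucketLocation`) apply to the bucket ARN, while the object-level action (`s3:PutObject`) applies to the objects under it (`/*`).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Sid": "AllowObjectUpload",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```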

### Test the SFTP server
<a name="test-the-sftp-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the security group. | In the **Security Groups** section of your SFTP server, add your test machine's IP to gain SFTP access. | Developer | 
| Use an SFTP client utility to test the server. | Test file transfers by using any SFTP client utility. For a list of clients and instructions, see [Transferring files using a client](https://docs.aws.amazon.com/transfer/latest/userguide/transfer-file.html) in the AWS Transfer Family documentation. | Developer | 
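Connecting with an SFTP client requires the private half of a key pair whose public half you registered for the user in the previous epic. A pair can be generated as in this sketch; the file name `sftp_user_key` is a hypothetical example, and the empty passphrase (`-N ""`) is for unattended testing only.

```shell
# Generate a 4096-bit RSA key pair in PEM format for an SFTP user.
# -N "" sets an empty passphrase; use a real passphrase for production keys.
ssh-keygen -t rsa -b 4096 -m PEM -N "" -f ./sftp_user_key

# The .pub half is what you paste into the user's "SSH public key" field;
# the private half stays with the SFTP client.
cat ./sftp_user_key.pub
```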

## Related resources
<a name="migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp-resources"></a>
+ [AWS Transfer Family User Guide](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-for-sftp.html)
+ [Amazon S3 User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
+ [Elastic IP addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) in the Amazon EC2 documentation

## Attachments
<a name="attachments-ec0a905c-edef-48ba-9b5e-ea4a4040d320"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/ec0a905c-edef-48ba-9b5e-ea4a4040d320/attachments/attachment.zip)

# Migrate an on-premises VM to Amazon EC2 by using AWS Application Migration Service
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service"></a>

*Thanh Nguyen, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-summary"></a>

When it comes to application migration, organizations can take different approaches to rehost (lift and shift) the application’s servers from the on-premises environment to the Amazon Web Services (AWS) Cloud. One way is to provision new Amazon Elastic Compute Cloud (Amazon EC2) instances and then install and configure the application from scratch. Another approach is to use third-party or AWS native migration services to migrate multiple servers at the same time.

This pattern outlines the steps for migrating a supported virtual machine (VM) to an Amazon EC2 instance on the AWS Cloud by using AWS Application Migration Service. You can use the approach in this pattern to migrate one or multiple virtual machines manually, one by one, or automatically by creating appropriate automation scripts based on the outlined steps. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-prereqs"></a>

**Prerequisites**
+ An active AWS account in one of the AWS Regions that support Application Migration Service
+ Network connectivity between the source server and target EC2 server through a private network by using AWS Direct Connect or a virtual private network (VPN), or through the internet

**Limitations**
+ For the latest list of supported Regions, see [Supported AWS Regions](https://docs.aws.amazon.com/mgn/latest/ug/supported-regions.html) in the Application Migration Service documentation.
+ For a list of supported operating systems, see [Supported operating systems](https://docs.aws.amazon.com/mgn/latest/ug/Supported-Operating-Systems.html) in the Application Migration Service documentation and the *General* section of the [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/).

## Architecture
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-architecture"></a>

**Source technology stack**
+ A physical, virtual, or cloud-hosted server running an operating system supported by Amazon EC2

**Target technology stack**
+ An Amazon EC2 instance running the same operating system as the source VM
+ Amazon Elastic Block Store (Amazon EBS)

**Source and target architecture**

The following diagram shows the high-level architecture and main components of the solution. In the on-premises data center, there are virtual machines with local disks. On AWS, there is a staging area with replication servers and a migrated resources area with EC2 instances for test and cutover. Both subnets contain EBS volumes.

![\[Main components to migrate a supported VM to an Amazon EC2 instance on the AWS Cloud.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/58c8bafd-9a6d-42d4-a5ce-08c4b9a286a3/images/f8396fad-7ee9-4f75-800f-e819f509e151.png)


1. Initialize AWS Application Migration Service.

1. Set up the staging area server configuration and reporting, including staging area resources.

1. Install agents on source servers, and use continuous block-level data replication (compressed and encrypted).

1. Automate orchestration and system conversion to shorten the cutover window.

**Network architecture**

The following diagram shows the high-level architecture and main components of the solution from the networking perspective, including required protocols and ports for communication between primary components in the on-premises data center and on AWS.

![\[Networking components including protocols and ports for communication between data center and AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/58c8bafd-9a6d-42d4-a5ce-08c4b9a286a3/images/2f594daa-ddba-4841-8785-6067e8d83c2f.png)


## Tools
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-tools"></a>
+ [AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) helps you rehost (*lift and shift*) applications to the AWS Cloud without change and with minimal downtime.

## Best practices
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-best-practices"></a>
+ Do not take the source server offline or perform a reboot until the cutover to the target EC2 instance is complete.
+ Provide ample opportunity for the users to perform user acceptance testing (UAT) on the target server to identify and resolve any issues. Ideally, this testing should start at least two weeks before cutover.
+ Frequently monitor the server replication status on the Application Migration Service console to identify issues early on.
+ Use temporary AWS Identity and Access Management (IAM) credentials for agent installation instead of permanent IAM user credentials.

## Epics
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-epics"></a>

### Generate AWS credentials
<a name="generate-aws-credentials"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS Replication Agent IAM role. | Sign in to the AWS account with administrative permissions. On the AWS Identity and Access Management (IAM) [console](https://console.aws.amazon.com/iam/), create an IAM role: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.html) | AWS administrator, Migration engineer | 
| Generate temporary security credentials. | On a machine with the AWS Command Line Interface (AWS CLI) installed, sign in with administrative permissions. Alternatively, in a supported AWS Region, sign in to the AWS Management Console with administrative permissions and open AWS CloudShell. Generate temporary credentials with the following command, replacing `<account-id>` with the AWS account ID: `aws sts assume-role --role-arn arn:aws:iam::<account-id>:role/MGN_Agent_Installation_Role --role-session-name mgn_installation_session_role`. From the output of the command, copy the values for `AccessKeyId`, `SecretAccessKey`, and `SessionToken`, and store them in a safe location for later use. These temporary credentials expire after one hour. If you need credentials after that, repeat the previous steps. | AWS administrator, Migration engineer | 
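The copy-and-store step can be scripted. This sketch parses a response in the shape that `aws sts assume-role` returns, using `python3` so that no extra tools such as `jq` are needed; the JSON literal and credential values are placeholders standing in for real command output, which you would capture with `CREDS_JSON=$(aws sts assume-role ...)`.

```shell
# Placeholder response in the shape `aws sts assume-role` returns.
CREDS_JSON='{"Credentials":{"AccessKeyId":"ASIAEXAMPLEKEYID","SecretAccessKey":"exampleSecretKey","SessionToken":"exampleSessionToken"}}'

# Extract one field from the Credentials object.
get_cred() {
  printf '%s' "$CREDS_JSON" |
    python3 -c "import json,sys; print(json.load(sys.stdin)['Credentials']['$1'])"
}

# These exported variables carry the values the AWS Replication Agent
# installer prompts for on the source server.
export AWS_ACCESS_KEY_ID="$(get_cred AccessKeyId)"
export AWS_SECRET_ACCESS_KEY="$(get_cred SecretAccessKey)"
export AWS_SESSION_TOKEN="$(get_cred SessionToken)"
```

Because the credentials expire after one hour, rerun the `assume-role` command and re-export the variables if the installation takes longer.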

### Initialize Application Migration Service and create the Replication Settings template
<a name="initialize-application-migration-service-and-create-the-replication-settings-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the service. | Sign in to the AWS account with administrative permissions and open the console. Choose **Application Migration Service**, and then choose **Get started**. | AWS administrator, Migration engineer | 
| Create and configure the Replication Settings template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.html)Application Migration Service will automatically create all the IAM roles required to facilitate data replication and the launching of migrated servers. | AWS administrator, Migration engineer | 

### Install AWS Replication Agents on source machines
<a name="install-aws-replication-agents-on-source-machines"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Have the required AWS credentials ready. | When you run the installer file on a source server, you will need to enter the temporary credentials that you generated earlier, including `AccessKeyId`, `SecretAccessKey`, and `SessionToken`. | Migration engineer, AWS administrator | 
| For Linux servers, install the agent. | Copy the installer command, log in to your source servers, and run the installer. For detailed instructions, see the [AWS documentation](https://docs.aws.amazon.com/mgn/latest/ug/linux-agent.html). | AWS administrator, Migration engineer | 
| For Windows servers, install the agent. | Download the installer file to each server, and then run the installer command. For detailed instructions, see the [AWS documentation](https://docs.aws.amazon.com/mgn/latest/ug/windows-agent.html). | AWS administrator, Migration engineer | 
| Wait for initial data replication to be completed. | When the agent has been installed, the source server will appear on the Application Migration Service console, in the **Source servers** section. Wait while the server undergoes initial data replication. | AWS administrator, Migration engineer | 

### Configure launch settings
<a name="configure-launch-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Specify the server details. | On the Application Migration Service console, choose the **Source servers** section, and then choose a server name from the list to access the server details. | AWS administrator, Migration engineer | 
| Configure the launch settings.  | Choose the **Launch settings** tab. You can configure a variety of settings, including general launch settings and EC2 launch template settings. For detailed instructions, see the [AWS documentation](https://docs.aws.amazon.com/mgn/latest/ug/launch-settings.html). | AWS administrator, Migration engineer | 

### Perform a test
<a name="perform-a-test"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the source servers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.html)The servers will be launched. | AWS administrator, Migration engineer | 
| Verify that the test completed successfully. | After the test server is completely launched, the **Alerts** status on the page will show **Launched** for each server. | AWS administrator, Migration engineer | 
| Test the server. | Perform testing against the test server to ensure that it functions as expected. | AWS administrator, Migration engineer | 

### Schedule and perform a cutover
<a name="schedule-and-perform-a-cutover"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Schedule a cutover window. | Schedule an appropriate cutover timeframe with relevant teams. | AWS administrator, Migration engineer | 
| Perform the cutover. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service.html)The source server's **Migration lifecycle** will change to **Cutover in progress**. | AWS administrator, Migration engineer | 
| Verify that the cutover completed successfully. | After the cutover servers are completely launched, the **Alerts** status on the **Source servers** page will show **Launched** for each server. | AWS administrator, Migration engineer | 
| Test the server. | Perform testing against the cutover server to ensure that it functions as expected. | AWS administrator, Migration engineer | 
| Finalize the cutover. | Choose **Test and Cutover**, and then select **Finalize cutover** to finalize the migration process. | AWS administrator, Migration engineer | 

## Related resources
<a name="migrate-an-on-premises-vm-to-amazon-ec2-by-using-aws-application-migration-service-resources"></a>
+ [AWS Application Migration Service](https://aws.amazon.com/application-migration-service/)
+ [AWS Application Migration Service User Guide](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html)

# Migrate small sets of data from on premises to Amazon S3 using AWS SFTP
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp"></a>

*Charles Gibson and Sergiy Shevchenko, Amazon Web Services*

## Summary
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-summary"></a>

This pattern describes how to migrate small sets of data (5 TB or less) from on-premises data centers to Amazon Simple Storage Service (Amazon S3) by using AWS Transfer for SFTP (AWS SFTP). The data can be either database dumps or flat files.

## Prerequisites and limitations
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An AWS Direct Connect link established between your data center and AWS

**Limitations**
+ Individual data files must be 5 TB or less, which is the maximum object size in Amazon S3. For larger data sets, split the data into multiple files or choose another data transfer method. 

## Architecture
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-architecture"></a>

**Source technology stack**
+ On-premises flat files or database dumps

**Target technology stack**
+ Amazon S3

**Source and target architecture**

![\[Diagram showing data flow from on-premises servers to AWS Cloud services via Direct Connect and VPN.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a9c016ff-3e68-4714-ac51-46cb4727397a/images/5c5bb9ea-d552-44e8-8d0d-df341f84f55d.png)


## Tools
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-tools"></a>
+ [AWS SFTP](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-for-sftp.html) – Enables the transfer of files directly into and out of Amazon S3 by using the Secure Shell (SSH) File Transfer Protocol (SFTP).
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) – Establishes a dedicated network connection from your on-premises data centers to AWS.
+ [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) – Enable you to privately connect a VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without an internet gateway, network address translation (NAT) device, VPN connection, or Direct Connect connection. Instances in a VPC don't require public IP addresses to communicate with resources in the service.

## Epics
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-epics"></a>

### Prepare for the migration
<a name="prepare-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Document the current SFTP requirements. |  | Application owner, SA | 
| Identify the authentication requirements. | Requirements may include key-based authentication, user name or password, or identity provider (IdP). | Application owner, SA | 
| Identify the application integration requirements. |  | Application owner | 
| Identify the users who require the service. |  | Application owner | 
| Determine the DNS name for the SFTP server endpoint. |  | Networking | 
| Determine the backup strategy. |  | SA, DBA (if data is transferred)  | 
| Identify the application migration or cutover strategy. |  | Application owner, SA, DBA | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create one or more virtual private clouds (VPCs) and subnets in your AWS account. |  | Application owner, AMS | 
| Create the security groups and network access control list (ACL). |  | Security, Networking, AMS | 
| Create the Amazon S3 bucket. |  | Application owner, AMS | 
| Create the AWS Identity and Access Management (IAM) role. | Create an IAM policy that includes the permissions to enable AWS SFTP to access your Amazon S3 bucket. This IAM policy determines what level of access you provide SFTP users. Create another IAM policy to establish a trust relationship with AWS SFTP. | Security, AMS | 
| Associate a registered domain (optional). | If you have your own registered domain, you can associate it with the SFTP server. You can route SFTP traffic to your SFTP server endpoint from a domain or from a subdomain. | Networking, AMS | 
| Create an SFTP server. | Specify the identity provider type used by the service to authenticate your users. | Application owner, AMS | 
| Open an SFTP client. | Open an SFTP client and configure the connection to use the SFTP endpoint host. AWS SFTP supports any standard SFTP client. Commonly used SFTP clients include OpenSSH, WinSCP, Cyberduck, and FileZilla. You can get the SFTP server host name from the AWS SFTP console. | Application owner, AMS | 
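
To make the IAM role task above concrete, a scoped-down Amazon S3 access policy for SFTP users might look like the following sketch. The bucket name `amzn-s3-demo-bucket` is a placeholder; the companion trust policy is not shown and simply allows the `transfer.amazonaws.com` service principal to perform `sts:AssumeRole`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Sid": "AllowObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```

Narrow the actions and resource paths to match the level of access you want each SFTP user to have.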

### Plan and test
<a name="plan-and-test"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Plan the application migration. | Plan for any application configuration changes required, set the migration date, and determine the test schedule. | Application owner, AMS | 
| Test the infrastructure. | Test in a non-production environment. | Application owner, AMS | 
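
For the infrastructure test, a minimal smoke test with the OpenSSH `sftp` client can be scripted as follows. The key path, user name, and server host name are placeholders; take the real host name from the AWS SFTP console as noted earlier. The script is only written and syntax-checked here, because a real run needs the provisioned endpoint.

```shell
# Sketch: non-interactive upload through the SFTP endpoint (placeholder values).
cat > sftp_smoke_test.sh <<'EOF'
#!/bin/sh
# Upload a sample file into the S3-backed home directory.
# Replace the key, user, and server host name with your own values.
echo "put sample.txt" | sftp -i ~/.ssh/transfer-key \
  sftpuser@s-0123456789abcdef0.server.transfer.us-east-1.amazonaws.com
EOF
bash -n sftp_smoke_test.sh  # syntax check only
```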

## Related resources
<a name="migrate-small-sets-of-data-from-on-premises-to-amazon-s3-using-aws-sftp-resources"></a>

**References**
+ [AWS Transfer for SFTP User Guide](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-for-sftp.html)
+ [AWS Direct Connect resources](https://aws.amazon.com/directconnect/resources/) 
+ [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html)

**Tutorials and videos**
+ [AWS Transfer for SFTP (video)](https://www.youtube.com/watch?v=wcnGez5PP1E)
+ [AWS SA Whiteboarding - Direct Connect (video) ](https://www.youtube.com/watch?v=uP68iqyuqTg)

# Migrate an on-premises Oracle database to Oracle on Amazon EC2
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2"></a>

*Baji Shaik and Pankaj Choudhary, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-summary"></a>

This pattern walks you through the steps for migrating an on-premises Oracle database to Oracle on an Amazon Elastic Compute Cloud (Amazon EC2) instance. It describes two options for migration: using AWS Database Migration Service (AWS DMS) or using native Oracle tools such as RMAN, Data Pump import/export, transportable tablespaces, and Oracle GoldenGate. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle database in an on-premises data center

**Limitations**
+ The target operating system (OS) must be supported by Amazon EC2. For a complete list of supported systems, see [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/).

**Product versions**
+ Oracle Database versions 10.2 and later (for 10.x), 11g, 12.1 through 12.2, and 18c, for the Enterprise, Standard, Standard One, and Standard Two editions. For the latest list of versions supported by AWS DMS, see "On-premises and Amazon EC2 instance databases" in [Sources for Data Migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html) in the AWS DMS documentation.  

## Architecture
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-architecture"></a>

**Source technology stack**
+ An on-premises Oracle database

**Target technology stack**
+ An Oracle database instance on Amazon EC2

**Target architecture**

![\[Setting up replication for an Oracle database on Amazon EC2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/66c98694-6580-4ffb-9f16-84de58cf8b07/images/386d5b14-8633-4ecc-98fb-59872de99d41.png)


**Data migration architecture**

*Using AWS DMS:*

![\[Migrating an on-premises Oracle database to Amazon EC2 with AWS DMS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/66c98694-6580-4ffb-9f16-84de58cf8b07/images/14954066-d22b-486a-a432-265296752878.png)


*Using native Oracle tools:*

![\[Migrating an on-premises Oracle database to Amazon EC2 with Oracle tools.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/66c98694-6580-4ffb-9f16-84de58cf8b07/images/82ba5fcb-8640-45fa-b432-2702dedc0774.png)


## Tools
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-tools"></a>
+ **AWS DMS** – [AWS Database Migration Service](https://docs.aws.amazon.com/dms/index.html) (AWS DMS) supports several types of source and target databases. For information about the database versions and editions that are supported, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support.
+ **Native Oracle tools** – RMAN, Data Pump import/export, transportable tablespaces, Oracle GoldenGate

## Epics
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Validate the versions of the source and target databases. |  | DBA | 
|  Identify the version of the target OS. |  | DBA, SysAdmin | 
| Identify hardware requirements for the target server instance based on the Oracle compatibility list and capacity requirements. |  | DBA, SysAdmin | 
| Identify storage requirements (storage type and capacity). |  | DBA, SysAdmin | 
| Identify network requirements (latency and bandwidth). |  | DBA, SysAdmin | 
| Choose the proper instance type based on capacity, storage features, and network features. |  | DBA, SysAdmin | 
| Identify network/host access security requirements for source and target databases. |  | DBA, SysAdmin | 
| Identify a list of OS users required for Oracle software installation. |  | DBA, SysAdmin | 
| Download AWS Schema Conversion Tool (AWS SCT) and drivers. |  | DBA | 
| Create an AWS SCT project for the workload, and connect to the source database. |  | DBA | 
| Generate SQL files for the creation of objects (tables, indexes, sequences, etc.). |  | DBA | 
| Determine a backup strategy. |  | DBA, SysAdmin  | 
| Determine availability requirements. |  | DBA | 
| Identify the application migration/switch-over strategy. |  | DBA, SysAdmin, App owner | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC) and subnets in your AWS account. |  | SysAdmin | 
| Create security groups and network access control lists (ACLs). |  | SysAdmin | 
| Configure and start the EC2 instance. |  | SysAdmin | 

### Install the Oracle software
<a name="install-the-oracle-software"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the OS users and groups required for the Oracle software. |  | DBA, SysAdmin | 
| Download the required version of Oracle software. |  |  | 
| Install the Oracle software on the EC2 instance. |  | DBA, SysAdmin | 
| Create objects like tables, primary keys, views, and sequences by using the scripts generated by AWS SCT. |  | DBA | 

### Migrate data - option 1
<a name="migrate-data---option-1"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use native Oracle tools or third-party tools to migrate database objects and data. | Oracle tools include Data Pump import/export, RMAN, transportable tablespaces, and GoldenGate. | DBA | 
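
As one illustration of the native-tool route, a file-based Data Pump migration can be sketched as below. All names are placeholders: the `ORCL` and `TARGETDB` connect identifiers, the `HR` schema, and the `DATA_PUMP_DIR` directory object, which must exist on both hosts. The commands are only written to a script and syntax-checked, not run.

```shell
# Sketch: export on the source, copy, then import on the target EC2 instance.
cat > datapump_migration.sh <<'EOF'
#!/bin/sh
# 1. Export the schema on the source host.
expdp system@ORCL SCHEMAS=HR DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=hr_%U.dmp LOGFILE=hr_export.log PARALLEL=4
# 2. Copy the dump files to the EC2 instance (for example, over scp or S3).
# 3. Import the schema on the target EC2 instance.
impdp system@TARGETDB SCHEMAS=HR DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=hr_%U.dmp LOGFILE=hr_import.log PARALLEL=4
EOF
bash -n datapump_migration.sh  # syntax check only
```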

### Migrate data - option 2
<a name="migrate-data---option-2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine the migration method. |  | DBA | 
| Create a replication instance in the AWS DMS console. |  | DBA | 
| Create source and target endpoints. |  | DBA | 
| Create a replication task. |  | DBA | 
| Enable change data capture (CDC) to capture changes for a continuous replication. |  | DBA | 
| Run the replication task and monitor logs. |  | DBA | 
| Create secondary objects like indexes and foreign keys when the full load is done. |  | DBA | 
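
The replication-task steps above can be scripted with the AWS CLI. In this sketch, the ARNs and the `HR` schema are placeholders for the replication instance and endpoints created in the earlier tasks, and `full-load-and-cdc` is the migration type that enables CDC for continuous replication. The command is written to a script and syntax-checked only.

```shell
# Sketch: create a full-load + CDC replication task (placeholder ARNs).
cat > create_dms_task.sh <<'EOF'
#!/bin/sh
aws dms create-replication-task \
  --replication-task-identifier oracle-ec2-migration \
  --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE \
  --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TARGET \
  --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:INSTANCE \
  --migration-type full-load-and-cdc \
  --table-mappings '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1","object-locator":{"schema-name":"HR","table-name":"%"},"rule-action":"include"}]}'
EOF
bash -n create_dms_task.sh  # syntax check only; a real run needs AWS credentials
```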

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application migration strategy. |  | DBA, SysAdmin, App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application cutover/switch-over strategy. |  | DBA, SysAdmin, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down temporary AWS resources. |  | DBA, SysAdmin | 
| Review and validate the project documents. |  | DBA, SysAdmin, App owner | 
| Gather metrics such as time to migrate, the percentage of manual vs. tool-based effort, and cost savings. |  | DBA, SysAdmin, App owner | 
| Close out the project and provide feedback. |  |  | 

## Related resources
<a name="migrate-an-on-premises-oracle-database-to-oracle-on-amazon-ec2-resources"></a>

**References**
+ [Strategies for Migrating Oracle Databases to AWS](https://docs.aws.amazon.com/whitepapers/latest/strategies-migrating-oracle-db-to-aws/strategies-migrating-oracle-db-to-aws.html) 
+ [Migrating Oracle databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/)
+ [Amazon EC2 website](https://aws.amazon.com/ec2/)
+ [AWS DMS website](https://aws.amazon.com/dms/)
+ [AWS DMS blog posts](https://aws.amazon.com/blogs/database/category/dms/)
+ [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/)
+ [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf)

**Tutorials and videos**
+ [Getting Started with Amazon EC2](https://aws.amazon.com/ec2/getting-started/)
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Introduction to Amazon EC2 - Elastic Cloud Server & Hosting with AWS (video)](https://www.youtube.com/watch?v=TsRBftzZsQo) 

# Migrate an on-premises Oracle database to Amazon EC2 by using Oracle Data Pump
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump"></a>

*Navakanth Talluri, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-summary"></a>

When migrating databases, you must consider factors such as the source and target database engines and versions, migration tools and services, and acceptable downtime periods. If you’re migrating an on-premises Oracle database to Amazon Elastic Compute Cloud (Amazon EC2), you can use Oracle tools, such as Oracle Data Pump and Oracle Recovery Manager (RMAN). For more information about strategies, see [Migrating Oracle databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/welcome.html).

Oracle Data Pump helps you extract the logical, consistent backup of the database and restore it to the target EC2 instance. This pattern describes how to migrate an on-premises Oracle database to an EC2 instance by using Oracle Data Pump and the `NETWORK_LINK` parameter, with minimal downtime. The `NETWORK_LINK` parameter starts an import through a database link. The Oracle Data Pump Import (impdp) client on the target EC2 instance connects to the source database, retrieves data from it, and writes the data directly to the database on the target instance. There are no backup, or *dump*, files used in this solution.

## Prerequisites and limitations
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An on-premises Oracle database that:
  + Isn’t an Oracle Real Application Clusters (RAC) database
  + Isn’t an Oracle Automatic Storage Management (Oracle ASM) database
  + Is in read-write mode
+ An AWS Direct Connect link between your on-premises data center and AWS. For more information, see [Create a connection](https://docs.aws.amazon.com/directconnect/latest/UserGuide/create-connection.html) (Direct Connect documentation).

**Product versions**
+ Oracle Database 10g release 1 (10.1) and later

## Architecture
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-architecture"></a>

**Source technology stack**
+ A standalone (non-RAC and non-ASM) Oracle database server in an on-premises data center

**Target technology stack**
+ An Oracle database running on Amazon EC2

**Target architecture**

The [reliability pillar](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html) of the AWS Well-Architected Framework recommends creating data backups to help provide high availability and resiliency. For more information, see [Architecting for high availability](https://docs.aws.amazon.com/whitepapers/latest/oracle-database-aws-best-practices/architecting-for-high-availability.html#amazon-ec2) in *Best Practices for Running Oracle Database on AWS*. This pattern sets up primary and standby databases on EC2 instances by using Oracle Active Data Guard. For high availability, the EC2 instances should be in different Availability Zones. However, the Availability Zones can be in the same AWS Region or in different AWS Regions.

Active Data Guard provides read-only access to a physical standby database and applies redo changes continuously from the primary database. Based on your recovery point objective (RPO) and recovery time objective (RTO), you can choose between synchronous and asynchronous redo transport options.

The following image shows the target architecture if the primary and standby EC2 instances are in different AWS Regions.

![\[Application connecting to the new database on the primary EC2 instance\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bdd49395-2f99-43e2-ad1d-a1d09d90fb58/images/37fcd4dc-5516-416b-a280-0c5f002880de.png)


**Data migration architecture**

After you have finished setting up the target architecture, you use Oracle Data Pump to migrate the on-premises data and schemas to the primary EC2 instance. During cutover, applications can’t access the on-premises database or the target database. You shut down these applications until they can be connected to the new target database on the primary EC2 instance.

The following image shows the architecture during the data migration. In this sample architecture, the primary and standby EC2 instances are in different AWS Regions.

![\[The source DB connects to the target DB. Applications are disconnected from source and target DBs\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bdd49395-2f99-43e2-ad1d-a1d09d90fb58/images/c58b669b-b11f-4d78-8911-c07b81b7c6a0.png)


## Tools
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-tools"></a>

**AWS services**
+ [AWS Direct Connect](https://aws.amazon.com/directconnect/) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.

**Other tools and services**
+ [Oracle Active Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7) helps you create, maintain, manage, and monitor standby databases.
+ [Oracle Data Pump](https://www.oracle.com/technetwork/documentation/data-pump-overview-084963.html) helps you move data and metadata from one database to another at high speeds.

## Best practices
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-best-practices"></a>
+ [Best Practices for Running Oracle Database on AWS](https://docs.aws.amazon.com/whitepapers/latest/oracle-database-aws-best-practices/architecting-for-security-and-performance.html)
+ [Importing data using NETWORK\_LINK](https://docs.oracle.com/database/121/SUTIL/GUID-23E58D59-A477-4A87-BD0E-C82447581D0A.htm#SUTIL856)

## Epics
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-epics"></a>

### Set up the EC2 instances on AWS
<a name="set-up-the-ec2-instances-on-aws"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify the source hardware configuration for the on-premises host and the kernel parameters. | Validate the on-premises configuration, including storage size, input/output operations per second (IOPS), and CPU. This is important for Oracle licensing, which is based on CPU cores. | DBA, SysAdmin | 
| Create the infrastructure on AWS. | Create the virtual private clouds (VPCs), private subnets, security groups, network access control lists (ACLs), route tables, and internet gateway. For more information, see the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump.html) | DBA, AWS systems administrator | 
| Set up the EC2 instances by using Active Data Guard. | Configure the EC2 instances in an Active Data Guard configuration, as described in the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html). The version of Oracle Database on the EC2 instance can be different from the on-premises version because this pattern uses logical backups. Note the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump.html) For more information, see: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump.html) | DBA, AWS systems administrator | 

### Migrate the database to Amazon EC2
<a name="migrate-the-database-to-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a dblink to the on-premises database from the EC2 instance. | Create a database link (dblink) between the Oracle database on the EC2 instance and the on-premises Oracle database. For more information, see [Using Network Link Import to Move Data](https://docs.oracle.com/database/121/SUTIL/GUID-3E1D4B46-E856-4ABE-ACC5-977A898BB0F1.htm#SUTIL806) (Oracle documentation). | DBA | 
| Verify the connection between the EC2 instance and the on-premises host. | Use the dblink to confirm that the connection between the EC2 instance and the on-premises database is functioning. For instructions, see [CREATE DATABASE LINK](https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5005.htm) (Oracle documentation). | DBA | 
| Stop all applications connected to the on-premises database. | After the database downtime is approved, shut down any applications and dependent jobs that connect to your on-premises database. You can do this either from the application directly or from the database by using cron. For more information, see [Use the Crontab Utility to Schedule Tasks on Oracle Linux](https://docs.oracle.com/en/learn/oracle-linux-crontab/index.html). | DBA, App developer | 
| Schedule the data migration job.  | On the target host, use the `impdp` command to schedule the Data Pump import. This connects the target database to the on-premises host and starts the data migration. For more information, see [Data Pump Import](https://docs.oracle.com/database/121/SUTIL/GUID-D11E340E-14C6-43B8-AB09-6335F0C1F71B.htm#SUTIL300) and [NETWORK\_LINK](https://docs.oracle.com/database/121/SUTIL/GUID-0871E56B-07EB-43B3-91DA-D1F457CF6182.htm#SUTIL919) (Oracle documentation). | DBA | 
| Validate the data migration. | Data validation is a crucial step. For data validation, you can use custom tools or Oracle tools, such as a combination of dblink and SQL queries. | DBA | 
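
The import task above can be sketched as a single `impdp` invocation. Placeholders: `ONPREM_DB` is the dblink created earlier, `HR` is an example schema, and `DATA_PUMP_DIR` must already exist on the target instance for the log file. With `NETWORK_LINK`, no dump file is written, which matches the no-dump-file approach described in the Summary. The command is written to a script and syntax-checked only.

```shell
# Sketch: network-mode import pulls data over the dblink, no intermediate dump files.
cat > run_network_import.sh <<'EOF'
#!/bin/sh
impdp system@TARGETDB \
  NETWORK_LINK=ONPREM_DB \
  SCHEMAS=HR \
  DIRECTORY=DATA_PUMP_DIR \
  LOGFILE=import_hr.log
EOF
bash -n run_network_import.sh  # syntax check only; a real run needs the target database
```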

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Put the source database in read-only mode. | Confirm that the application is shut down and no changes are being made to the source database. Open the source database in read-only mode. This helps you avoid any open transactions. For more information, see `ALTER DATABASE` in [SQL Statements](https://docs.oracle.com/database/121/SQLRF/statements_1006.htm#i2135540) (Oracle documentation). | DBA, DevOps engineer, App developer | 
| Validate the object count and data. | To validate the data and objects, use custom tools or Oracle tools, such as a combination of dblink and SQL queries. | DBA, App developer | 
| Connect the applications to the database on the primary EC2 instance. | Change the application’s connection attribute to point to the new database you created on the primary EC2 instance. | DBA, App developer | 
| Validate the application performance. | Start the application. Validate the functionality and performance of the application by using [Automatic Workload Repository](https://docs.oracle.com/database/121/RACAD/GUID-C3CD2DCE-38BD-46BA-BC32-7A28CAC9A7FD.htm#RACAD951) (Oracle documentation). | App developer, DevOps engineer, DBA | 
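
The read-only step in the table above can be captured in a short SQL*Plus script, run as SYSDBA on the source database. This is a sketch only; the script file is written but not executed here.

```shell
# Sketch: restart the source database read-only so no transactions remain open.
cat > open_read_only.sql <<'EOF'
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
EOF
```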

## Related resources
<a name="migrate-an-on-premises-oracle-database-to-amazon-ec2-by-using-oracle-data-pump-resources"></a>

**AWS references**
+ [Migrating Oracle databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/welcome.html)
+ [Amazon EC2 for Oracle](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/ec2-oracle.html)
+ [Migrating bulky Oracle databases to AWS for cross-platform environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/migrate-bulky-oracle-databases/welcome.html)
+ [VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
+ [Tutorial: Create a VPC for use with a database instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateVPC.html)

**Oracle references**
+ [Oracle Data Guard Configurations](https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-AB9DF863-2C7E-4767-81F2-56AD0FA30B49)
+ [Data Pump Import](https://docs.oracle.com/database/121/SUTIL/GUID-D11E340E-14C6-43B8-AB09-6335F0C1F71B.htm#SUTIL300)

# Migrate RHEL BYOL systems to AWS License-Included instances by using AWS MGN
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn"></a>

*Mike Kuznetsov, Amazon Web Services*

## Summary
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-summary"></a>

When you migrate your workloads to AWS by using AWS Application Migration Service (AWS MGN), you might have to lift and shift (rehost) your Red Hat Enterprise Linux (RHEL) instances and change the license from the default Bring Your Own License (BYOL) model to an AWS License Included (LI) model during migration. AWS MGN supports a scalable approach that uses Amazon Machine Image (AMI) IDs. This pattern describes how to accomplish the license change on RHEL servers during the rehost migration at scale. It also explains how to change the license for a RHEL system that’s already running on Amazon Elastic Compute Cloud (Amazon EC2).

## Prerequisites and limitations
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-prereqs"></a>

**Prerequisites**
+ Access to the target AWS account
+ AWS MGN initialized in the target AWS account and Region for the migration (not required if you have already migrated from your on-premises system to AWS)
+ A source RHEL server with a valid RHEL license

## Architecture
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-architecture"></a>

This pattern covers two scenarios:
+ Migrating a system from on premises directly into an AWS LI instance by using AWS MGN. For this scenario, follow the instructions in the first epic (*Migrate to LI instance - option 1*) and third epic.
+ Changing the licensing model from BYOL to LI for a previously migrated RHEL system that’s already running on Amazon EC2. For this scenario, follow the instructions in the second epic (*Migrate to LI instance* - *option 2*) and third epic.

**Note**  
The third epic involves reconfiguring the new RHEL instance to use the Red Hat Update Infrastructure (RHUI) servers provided by AWS. This process is the same for both scenarios.

## Tools
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-tools"></a>

**AWS services**
+ [AWS Application Migration Service (AWS MGN)](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) helps you rehost (lift and shift) applications to the AWS Cloud without change and with minimal downtime.

## Epics
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-epics"></a>

### Migrate to LI instance - option 1 (for an on-premises RHEL system)
<a name="migrate-to-li-instance---option-1-for-an-on-premises-rhel-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Find the AMI ID of the RHEL AWS LI instance in the target Region. | Visit [AWS Marketplace](https://aws.amazon.com/marketplace) or use the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) to find the RHEL AMI ID that matches the version of the RHEL source system (for example, RHEL-7.7), and write down the AMI ID. On the Amazon EC2 console, you can filter the AMIs by using one of the following search terms:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) | Cloud administrator | 
| Configure AWS MGN launch settings.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) AWS MGN will now use this version of the launch template to launch test or cutover instances. For more information, see the [AWS MGN documentation](https://docs.aws.amazon.com/mgn/latest/ug/ec2-launch.html). | Cloud administrator | 
| Validate settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) | Cloud administrator | 
| Launch the new LI instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) | Cloud administrator | 
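
The AMI lookup in the first task can also be done from the CLI. A sketch, assuming AWS CLI v2: `309956199498` is the account commonly used by Red Hat to publish RHEL AMIs, but verify it for your Region before relying on it. The command is only written to a script and syntax-checked here.

```shell
# Sketch: list the newest RHEL 7.7 AMI published by Red Hat in the current Region.
cat > find_rhel_ami.sh <<'EOF'
#!/bin/sh
aws ec2 describe-images \
  --owners 309956199498 \
  --filters "Name=name,Values=RHEL-7.7*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
EOF
bash -n find_rhel_ami.sh  # syntax check only; a real run needs AWS credentials
```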

### Migrate to LI instance - option 2 (for a RHEL BYOL EC2 instance)
<a name="migrate-to-li-instance---option-2-for-a-rhel-byol-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate your RHEL BYOL EC2 instance to an AWS LI instance. | You can switch RHEL systems that you previously migrated to AWS as BYOL to AWS LI instances by moving their disks (Amazon Elastic Block Store volumes) and attaching them to a new LI instance. To make this switch, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) | Cloud administrator | 

### Reconfigure RHEL OS to use AWS-provided RHUI – both options
<a name="reconfigure-rhel-os-to-use-aws-provided-rhui-ndash-both-options"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deregister the OS from the Red Hat subscription and license. | After migration and successful cutover, the RHEL system must be removed from the Red Hat subscription to stop consuming the Red Hat license and avoid double billing. To remove the RHEL OS from the Red Hat subscription, follow the process described in the [Red Hat Subscription Management (RHSM) documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-subscription-management-unregistering). Use the CLI command:  <pre>subscription-manager unregister</pre>You can also disable the subscription-manager plugin so that the subscription status is no longer checked on every **yum** call. To do this, edit the configuration file `/etc/yum/pluginconf.d/subscription-manager.conf` and change the parameter `enabled=1` to `enabled=0`. | Linux or system administrator | 
| Replace the old update configuration (RHUI, Red Hat Satellite network, yum repositories) with the AWS-provided RHUI. | You must reconfigure the migrated RHEL system to use the AWS-provided RHUI servers. This gives you access to the RHUI servers within AWS Regions without requiring the external update infrastructure. The change involves the following process:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html)Here are the detailed steps and commands:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.html) | Linux or system administrator | 
| Validate the configuration. | On the target migrated instance, verify that the new configuration is correct:<pre>sudo yum clean all <br />sudo yum repolist </pre> | Linux or system administrator | 
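The deregistration and plugin change described above can be sketched in shell. This is a minimal sketch: the `subscription-manager` call is shown as a comment because it requires a registered RHEL system, and the plugin edit is demonstrated on a local copy of the configuration file (the real path is `/etc/yum/pluginconf.d/subscription-manager.conf`).

```shell
# On the migrated instance, deregister first (requires RHEL; shown as a comment):
#   sudo subscription-manager unregister

# Then disable the subscription-manager yum plugin. A local copy of the
# configuration file is used here so the edit can be demonstrated safely.
conf=./subscription-manager.conf
printf '[main]\nenabled=1\n' > "$conf"

# Flip enabled=1 to enabled=0 so yum stops checking the subscription status.
sed -i 's/^enabled=1$/enabled=0/' "$conf"
grep '^enabled' "$conf"   # prints: enabled=0
```

After making the change on the instance itself, run `sudo yum clean all` and `sudo yum repolist` to confirm that only the intended repositories remain active.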

## Related resources
<a name="migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn-resources"></a>
+ [AWS Application Migration Service (AWS MGN) User Guide](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html)
+ [Get an AWS RHUI client package supporting IMDSv2](https://access.redhat.com/solutions/5009491) (Red Hat Knowledgebase article)
+ [Amazon EC2 launch templates](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) (Amazon EC2 documentation)

# Migrate an on-premises Microsoft SQL Server database to Amazon EC2
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2"></a>

*Senthil Ramasamy, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-summary"></a>

This pattern describes how to migrate an on-premises Microsoft SQL Server database to Microsoft SQL Server on an Amazon Elastic Compute Cloud (Amazon EC2) instance. It covers two options for migration: using AWS Database Migration Service (AWS DMS) or using native Microsoft SQL Server tools such as backup and restore, Copy Database Wizard, or copy and attach database. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An operating system supported by Amazon EC2 (for a complete list of supported operating system versions, see [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/))
+ A Microsoft SQL Server source database in an on-premises data center

**Product versions**
+ For on-premises and Amazon EC2 instance databases, AWS DMS supports: 
  + SQL Server versions 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, and 2019 
  + Enterprise, Standard, Workgroup, Developer, and Web editions
+ For the latest list of supported versions, see [Using a Microsoft SQL Server Database as a Target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html).   

## Architecture
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-architecture"></a>

**Source technology stack**
+ On-premises Microsoft SQL Server database

**Target technology stack**
+ Microsoft SQL Server database on an EC2 instance

**Target architecture**

![\[Primary and standby Microsoft SQL Server instances on EC2 instances in two Availability Zones.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f0a155b3-4977-4e1f-8332-89eab29c1e25/images/53e2c27d-ceb4-4d88-a022-93dd0b343eaf.png)


**Data migration architecture**
+ Using AWS DMS

![\[Migrating on-premises SQL Server data to an EC2 instance by using AWS DMS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f0a155b3-4977-4e1f-8332-89eab29c1e25/images/1cbe32ea-e285-4cac-9153-4428bad9b229.png)

+ Using native SQL Server tools 

![\[Migrating on-premises SQL Server data to an EC2 instance by using native SQL Server tools.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f0a155b3-4977-4e1f-8332-89eab29c1e25/images/ad2caf54-7399-4038-91a3-acba9fa7da29.png)


## Tools
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-tools"></a>
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/) helps you migrate your data to and from widely used commercial and open-source databases, including Oracle, SQL Server, MySQL, and PostgreSQL. You can use AWS DMS to migrate your data into the AWS Cloud, between on-premises instances (through an AWS Cloud setup), or between combinations of cloud and on-premises setups.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.
+ Native Microsoft SQL Server tools include backup and restore, Copy Database Wizard, and copy and attach database.

## Epics
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions. |  | DBA | 
| Identify the target operating system version. |  | DBA, Systems administrator | 
| Identify the hardware requirements for the target server instance based on the Microsoft SQL Server compatibility list and capacity requirements. |  | DBA, Systems administrator | 
| Identify the storage requirements for type and capacity. |  | DBA, Systems administrator | 
| Identify the network requirements, including latency and bandwidth. |  | DBA, Systems administrator | 
| Choose the EC2 instance type based on capacity, storage features, and network features. |  | DBA, Systems administrator | 
| Identify the network and host access security requirements for the source and target databases. |  | DBA, Systems administrator | 
| Identify a list of users required for the Microsoft SQL Server software installation. |  | DBA, Systems administrator | 
| Determine the backup strategy. |  | DBA | 
| Determine the availability requirements. |  | DBA | 
| Identify the application migration and cutover strategy. |  | DBA, Systems administrator | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC) and subnets. |  | Systems administrator | 
| Create security groups and network access control list (ACL). |  | Systems administrator | 
| Configure and start an EC2 instance. |  | Systems administrator | 

### Install the software
<a name="install-the-software"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the users and groups required for Microsoft SQL Server software. |  | DBA, Systems administrator | 
| Download the Microsoft SQL Server software. |  | DBA, Systems administrator | 
| Install Microsoft SQL Server software on the EC2 instance and configure the server. |  | DBA, Systems administrator | 

### Migrate the data - option 1
<a name="migrate-the-data---option-1"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use native Microsoft SQL Server tools or third-party tools to migrate the database objects and data. | Tools include backup and restore, Copy Database Wizard, and copy and attach database. For more information, see the guide [Migrating Microsoft SQL Server databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/). | DBA | 
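As one illustration of the backup-and-restore option, the T-SQL below could be generated and run through `sqlcmd`. The database name and file paths are hypothetical, and the `sqlcmd` invocations are shown as comments because they require a live SQL Server instance; treat this as a sketch rather than a prescribed procedure.

```shell
# Back up the source database (database name and paths are placeholders).
cat > backup.sql <<'EOF'
BACKUP DATABASE [SalesDB]
  TO DISK = N'D:\Backup\SalesDB.bak'
  WITH COMPRESSION, CHECKSUM, INIT;
EOF
#   sqlcmd -S SOURCE-SERVER -i backup.sql

# Copy SalesDB.bak to the EC2 instance, then restore it there.
cat > restore.sql <<'EOF'
RESTORE DATABASE [SalesDB]
  FROM DISK = N'D:\Backup\SalesDB.bak'
  WITH RECOVERY;
EOF
#   sqlcmd -S EC2-TARGET -i restore.sql
```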

### Migrate the data - option 2
<a name="migrate-the-data---option-2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the data by using AWS DMS. | For more information about using AWS DMS, see the links in the [Related resources](#migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-resources) section. | DBA | 

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application migration strategy. | Use AWS Schema Conversion Tool (AWS SCT) to analyze and modify SQL code that’s embedded in application source code. | DBA, App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application switch-over strategy. |  | DBA, App owner, Systems administrator | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down all temporary AWS resources. | Temporary resources include the AWS DMS replication instance and the EC2 instance for AWS SCT. | DBA, Systems administrator | 
| Review and validate the project documents. |  | DBA, App owner, Systems administrator | 
| Gather metrics around time to migrate, percent of manual versus tool cost savings, and so on. |  | DBA, App owner, Systems administrator | 
| Close the project and provide feedback. |  | DBA, App owner, Systems administrator | 

## Related resources
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-ec2-resources"></a>

**References**
+ [Migrating Microsoft SQL Server databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/)
+ [Amazon EC2](https://aws.amazon.com/ec2/)
+ [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/)
+ [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/)
+ [AWS Database Migration Service](https://aws.amazon.com/dms/)
+ [Microsoft Products on AWS](https://aws.amazon.com/windows/products/)
+ [Microsoft Licensing on AWS](https://aws.amazon.com/windows/resources/licensing/)
+ [Microsoft SQL Server on AWS](https://aws.amazon.com/windows/products/sql/)

**Tutorials and videos**
+ [Getting Started with Amazon EC2](https://aws.amazon.com/ec2/getting-started/)
+ [Getting Started with AWS Database Migration Service](https://aws.amazon.com/dms/getting-started/)
+ [Join an Amazon EC2 instance to your Simple AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/simple_ad_join_instance.html)
+ [Join an Amazon EC2 instance to your AWS Managed Microsoft AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_join_instance.html)
+ [AWS Database Migration Service](https://www.youtube.com/watch?v=zb4GcjEdl8U) (video)
+ [Introduction to Amazon EC2 – Elastic Cloud Server & Hosting with AWS](https://www.youtube.com/watch?v=TsRBftzZsQo) (video)

# Rehost on-premises workloads in the AWS Cloud: migration checklist
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist"></a>

*Srikanth Rangavajhala, Amazon Web Services*

## Summary
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-summary"></a>

Rehosting on-premises workloads in the Amazon Web Services (AWS) Cloud involves the following migration phases: planning, pre-discovery, discovery, build, test, and cutover. This pattern outlines the phases and their related tasks. The tasks are described at a high level and support about 75% of all application workloads. You can implement these tasks over two to three weeks in an agile sprint cycle.

You should review and vet these tasks with your migration team and consultants. After the review, you can gather the input, eliminate or re-evaluate tasks as necessary to meet your requirements, and modify other tasks to support at least 75% of the application workloads in your portfolio. You can then use an agile project management tool such as Atlassian Jira or Rally Software to import the tasks, assign them to resources, and track your migration activities. 

The pattern assumes that you're using [AWS Cloud Migration Factory](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/solution-overview.html) to rehost your workloads, but you can use your migration tool of choice.

## Prerequisites and limitations
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-prereqs"></a>

**Prerequisites**
+ Project management tool for tracking migration tasks (for example, Atlassian Jira or Rally Software)
+ Migration tool for rehosting your workloads on AWS (for example, [Cloud Migration Factory](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/solution-overview.html))

## Architecture
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-architecture"></a>

**Source platform**
+ On-premises source stack (including technologies, applications, databases, and infrastructure)  

**Target platform**
+ AWS Cloud target stack (including technologies, applications, databases, and infrastructure) 

**Architecture**

The following diagram illustrates rehosting (discovering and migrating servers from an on-premises source environment to AWS) by using Cloud Migration Factory and AWS Application Migration Service.

![\[Rehosting servers on AWS by using Cloud Migration Factory and Application Migration Service\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8e2d2d72-30cc-4e98-8abd-ac2ef95e599b/images/735ad65b-2646-4803-82c9-f7f93369b3a5.png)


## Tools
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-tools"></a>
+ You can use a migration and project management tool of your choice.

## Epics
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-epics"></a>

### Planning phase
<a name="planning-phase"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Groom the pre-discovery backlog. | Conduct the pre-discovery backlog grooming working session with department leads and application owners.  | Project manager, Agile scrum leader | 
|  Conduct the sprint planning working session. | As a scoping exercise, distribute the applications that you want to migrate across sprints and waves. | Project manager, Agile scrum leader | 

### Pre-discovery phase
<a name="pre-discovery-phase"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm application knowledge. | Confirm and document the application owner and their knowledge of the application. Determine whether there's another point person for technical questions. | Migration specialist (interviewer) | 
| Determine application compliance requirements. | Confirm with the application owner that the application doesn't have to comply with requirements for Payment Card Industry Data Security Standard (PCI DSS), Sarbanes-Oxley Act (SOX), personally identifiable information (PII), or other standards. If compliance requirements exist, teams must finish their compliance checks on the servers that will be migrated. | Migration specialist (interviewer) | 
| Confirm production release requirements.  | Confirm the requirements for releasing the migrated application to production (such as release date and downtime duration) with the application owner or technical contact. | Migration specialist (interviewer) | 
| Get server list. | Get the list of servers that are associated with the targeted application. | Migration specialist (interviewer) | 
| Get the logical diagram that shows the current state. | Obtain the current state diagram for the application from the enterprise architect or the application owner. | Migration specialist (interviewer) | 
| Create a logical diagram that shows the target state. | Create a logical diagram of the application that shows the target architecture on AWS. This diagram should illustrate servers, connectivity, and mapping factors. | Enterprise architect, Business owner | 
| Get server information. | Collect information about the servers that are associated with the application, including their configuration details. | Migration specialist (interviewer) | 
| Add server information to the discovery template. | Add detailed server information to the application discovery template (see `mobilize-application-questionnaire.xlsx` in the attachment for this pattern). This template includes all the application-related security, infrastructure, operating system, and networking details. | Migration specialist (interviewer) | 
| Publish the application discovery template. | Share the application discovery template with the application owner and migration team for common access and use. | Migration specialist (interviewer) | 

### Discovery phase
<a name="discovery-phase"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm server list. | Confirm the list of servers and the purpose of each server with the application owner or technical lead. | Migration specialist | 
| Identify and add server groups. | Identify server groups such as web servers or application servers, and add this information to the application discovery template. Select the tier of the application (web, application, database) that each server should belong to. | Migration specialist | 
| Fill in the application discovery template. | Complete the details of the application discovery template with the help of the migration team, application team, and AWS. | Migration specialist | 
| Add missing server details (middleware and OS teams). | Ask middleware and operating system (OS) teams to review the application discovery template and add any missing server details, including database information. | Migration specialist | 
| Get inbound/outbound traffic rules (network team). | Ask the network team to get the inbound/outbound traffic rules for the source and destination servers. The network team should also add existing firewall rules, export these to a security group format, and add existing load balancers to the application discovery template. | Migration specialist | 
| Identify required tagging. | Determine the tagging requirements for the application. | Migration specialist | 
| Create firewall request details. | Capture and filter the firewall rules that are required to communicate with the application.  | Migration specialist, Solutions architect, Network lead  | 
| Update the EC2 instance type. | Update the Amazon Elastic Compute Cloud (Amazon EC2) instance type to be used in the target environment, based on infrastructure and server requirements.  | Migration specialist, Solutions architect, Network lead | 
| Identify the current state diagram. | Identify or create the diagram that shows the current state of the application. This diagram will be used in the information security (InfoSec) request.  | Migration specialist, Solutions architect | 
| Finalize the future state diagram. | Finalize the diagram that shows the future (target) state for the application. This diagram will also be used in the InfoSec request.   | Migration specialist, Solutions architect | 
| Create firewall or security group service requests. | Create firewall or security group service requests (for development/QA, pre-production, and production). If you're using Cloud Migration Factory, include replication-specific ports if they're not already open.  | Migration specialist, Solutions architect, Network lead | 
| Review firewall or security group requests (InfoSec team). | In this step, the InfoSec team reviews and approves the firewall or security group requests that were created in the previous step.  | InfoSec engineer, Migration specialist | 
| Implement firewall security group requests (network team). | After the InfoSec team approves the firewall requests, the network team implements the required inbound/outbound firewall rules.  | Migration specialist, Solutions architect, Network lead | 
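A firewall or security group request like the one described above can be captured as code so that the InfoSec review and the network team's implementation work from the same artifact. The sketch below is illustrative: the security group ID and CIDR range are placeholders, the port numbers (TCP 1500 and 443) are the replication-specific ports mentioned in this pattern, and the AWS CLI call is shown as a comment because it requires credentials.

```shell
# Ingress rules for the replication-specific ports (placeholders throughout).
cat > ingress.json <<'EOF'
[{"IpProtocol":"tcp","FromPort":1500,"ToPort":1500,
  "IpRanges":[{"CidrIp":"10.0.0.0/16","Description":"Replication data"}]},
 {"IpProtocol":"tcp","FromPort":443,"ToPort":443,
  "IpRanges":[{"CidrIp":"10.0.0.0/16","Description":"Replication control"}]}]
EOF
# Apply after InfoSec approval (requires credentials; shown as a comment):
#   aws ec2 authorize-security-group-ingress \
#     --group-id sg-0123456789abcdef0 --ip-permissions file://ingress.json
```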

### Build phase (repeat for development/QA, pre-production, and production environments)
<a name="build-phase-repeat-for-development-qa-pre-production-and-production-environments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Import the application and server data. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist.html)If you aren't using Cloud Migration Factory, follow the instructions for setting up your migration tool. | Migration specialist, Cloud administrator | 
| Check prerequisites for source servers. | Connect with the in-scope source servers to verify prerequisites such as TCP port 1500, TCP port 443, root volume free space, .NET Framework version, and other parameters. These are required for replication. For additional information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#prerequisites-2). | Migration specialist, Cloud administrator | 
| Create a service request to install replication agents.  | Create a service request to install replication agents on the in-scope servers for development/QA, pre-production, or production. | Migration specialist, Cloud administrator | 
| Install the replication agents. | Install the replication agents on the in-scope source servers on the development/QA, pre-production, or production machines. For additional information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#install-the-replication-agents). | Migration specialist, Cloud administrator | 
| Push the post-launch scripts. | Application Migration Service supports post-launch scripts to help you automate OS-level activities such as installing or uninstalling software after you launch target instances. This step pushes the post-launch scripts to Windows or Linux machines, depending on the servers identified for migration. For instructions, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#push-the-post-launch-scripts). | Migration specialist, Cloud administrator | 
| Verify the replication status. | Confirm the replication status for the in-scope source servers automatically by using the provided script. The script repeats every five minutes until the status of all source servers in the given wave changes to **Healthy**. For instructions, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#verify-the-replication-status). | Migration specialist, Cloud administrator | 
| Create the admin user. | A local admin or sudo user on source machines might be needed to troubleshoot any issues after migration cutover from the in-scope source servers to AWS. The migration team uses this user to log in to the target server when the authentication server (for example, the DC or LDAP server) is not reachable. For instructions for this step, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/step4.html#add-a-user-to-the-admin-group). | Migration specialist, Cloud administrator | 
| Validate the launch template. | Validate the server metadata to make sure it works successfully and has no invalid data. This step validates both test and cutover metadata. For instructions, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#validate-launch-template-1). | Migration specialist, Cloud administrator | 
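The prerequisite checks in this phase (free space on the root volume, outbound TCP 1500 and 443) can be scripted for Linux source servers. The helper names below are illustrative, not part of Cloud Migration Factory, and the 2 GB threshold is an assumption; check the Cloud Migration Factory Implementation Guide for the authoritative values.

```shell
# Prints free space in GiB on the filesystem holding the given path.
free_gb() {
  df -BG --output=avail "$1" | tail -1 | tr -dc '0-9'
}

# Returns 0 if TCP port $2 on host $1 accepts a connection within 3 seconds.
port_open() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Replication agents need free space on the root volume and outbound
# TCP 1500 (replication data) and 443 (control) to the replication server.
if [ "$(free_gb /)" -ge 2 ]; then echo "root volume OK"; fi
# port_open replication.example.com 1500 || echo "TCP 1500 blocked"
```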

### Test phase (repeat for development/QA, pre-production, and production environments)
<a name="test-phase-repeat-for-development-qa-pre-production-and-production-environments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a service request. | Create a service request for the infrastructure team and other teams to perform application cutover to development/QA, pre-production, or production instances.  | Migration specialist, Cloud administrator | 
| Configure a load balancer (optional). | Configure required load balancers, such as an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) or an [F5 load balancer](https://www.f5.com/resources/white-papers/load-balancing-101-nuts-and-bolts) with iRules. | Migration specialist, Cloud administrator | 
| Launch instances for testing. | Launch all target machines for a given wave in Application Migration Service in test mode. For additional information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#launch-instances-for-testing). | Migration specialist, Cloud administrator | 
| Verify the target instance status. | Verify the status of the target instance by checking the bootup process for all in-scope source servers in the same wave. It may take up to 30 minutes for the target instances to boot up. You can check the status manually by logging in to the Amazon EC2 console, searching for the source server name, and reviewing the **Status check** column. The status **2/2 checks passed** indicates that the instance is healthy from an infrastructure perspective. | Migration specialist, Cloud administrator | 
| Modify DNS entries. | Modify Domain Name System (DNS) entries. (For Linux servers, update `resolv.conf` or `host.conf`; for Microsoft Windows servers, update the network adapter's DNS settings or the hosts file.) Configure each EC2 instance to point to the new IP address of this host. Make sure that there are no DNS conflicts between on-premises and AWS Cloud servers. This step and the following steps are optional, depending on the environment where the server is hosted. | Migration specialist, Cloud administrator | 
| Test connectivity to backend hosts from EC2 instances. | Check the logins by using the domain credentials for the migrated servers. | Migration specialist, Cloud administrator | 
| Update the DNS A record. | Update the DNS A record for each host to point to the new Amazon EC2 private IP address. | Migration specialist, Cloud administrator | 
| Update the DNS CNAME record. | Update the DNS CNAME record for virtual IPs (load balancer names) to point to the cluster for web and application servers. | Migration specialist, Cloud administrator | 
| Test the application in applicable environments. | Log in to the new EC2 instance and test the application in the development/QA, pre-production, and production environments. | Migration specialist, Cloud administrator | 
| Mark as ready for cutover. | When testing is complete, change the status of the source server to indicate that it's ready for cutover, so users can launch a cutover instance. For instructions, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#mark-as-ready-for-cutover). | Migration specialist, Cloud administrator | 
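If the DNS zone is hosted in Amazon Route 53, the A-record update described above can be issued as an UPSERT. The record name, IP address, and hosted zone ID below are hypothetical, and the AWS CLI call is shown as a comment because it requires credentials against a real hosted zone.

```shell
# Build a change batch that points the host at its new private IP.
new_ip=10.0.1.25
cat > change.json <<EOF
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
 {"Name":"app01.example.com","Type":"A","TTL":300,
  "ResourceRecords":[{"Value":"${new_ip}"}]}}]}
EOF
grep -q "$new_ip" change.json && echo "change batch ready"

# Submit the change (requires credentials; shown as a comment):
#   aws route53 change-resource-record-sets \
#     --hosted-zone-id Z0000000000000 --change-batch file://change.json
```

The same UPSERT mechanism works for the CNAME records that point virtual IPs at load balancer names; only `Type` and `Value` change.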

### Cutover phase
<a name="cutover-phase"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create production deployment plan. | Create a production deployment plan (including a backout plan). | Migration specialist, Cloud administrator | 
| Notify operations team of downtime. | Notify the operations team of the downtime schedule for the servers. Some teams might require a change request or service request (CR/SR) ticket for this notification. | Migration specialist, Cloud administrator | 
| Replicate production machines. | Replicate production machines by using Application Migration Service or another migration tool. | Migration specialist, Cloud administrator | 
| Shut down in-scope source servers. | After you verify the source servers’ replication status, you can shut down the source servers to stop transactions from client applications to the servers. You can shut down the source servers in the cutover window. For more information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#shut-down-the-in-scope-source-servers). | Cloud administrator | 
| Launch instances for cutover. | Launch all target machines for a given wave in Application Migration Service in cutover mode. For more information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-factory-web-console.html#launch-instances-for-cutover). | Migration specialist, Cloud administrator | 
| Retrieve target instance IPs. | Retrieve the IPs for target instances. If the DNS update is a manual process in your environment, you must get the new IP addresses for all target instances. For more information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-command-prompt.html#retrieve-the-target-instance-ip). | Migration specialist, Cloud administrator | 
| Verify target server connections. | After you update the DNS records, connect to the target instances with the host name to verify the connections. For more information, see the [Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/list-of-automated-migration-activities-using-command-prompt.html#verify-the-target-server-connections). | Migration specialist, Cloud administrator | 
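Retrieving the new private IPs can be done with the AWS CLI. The instance name tag below is hypothetical, and the CLI call is shown as a comment because it requires credentials; the offline lines illustrate pulling the address out of a `describe-instances` response.

```shell
# The CLI call (requires credentials; shown as a comment):
#   aws ec2 describe-instances --filters "Name=tag:Name,Values=app01" \
#     --query "Reservations[].Instances[].PrivateIpAddress" --output text

# Offline illustration with a minimal sample response:
cat > response.json <<'EOF'
{"Reservations":[{"Instances":[{"PrivateIpAddress":"10.0.1.25"}]}]}
EOF
sed -n 's/.*"PrivateIpAddress":"\([0-9.]*\)".*/\1/p' response.json
```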

## Related resources
<a name="rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist-resources"></a>
+ [How to migrate](https://aws.amazon.com/migrate-modernize-build/cloud-migration/how-to-migrate/)
+ [AWS Cloud Migration Factory Implementation Guide](https://docs.aws.amazon.com/solutions/latest/cloud-migration-factory-on-aws/solution-overview.html)
+ [Automating large-scale server migrations with Cloud Migration Factory](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-factory-cloudendure/welcome.html)
+ [AWS Application Migration Service User Guide](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html)
+ [AWS Migration Acceleration Program](https://aws.amazon.com/migration-acceleration-program/)

## Attachments
<a name="attachments-8e2d2d72-30cc-4e98-8abd-ac2ef95e599b"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/8e2d2d72-30cc-4e98-8abd-ac2ef95e599b/attachments/attachment.zip)

# Set up Multi-AZ infrastructure for a SQL Server Always On FCI by using Amazon FSx
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx"></a>

*Manish Garg, T.V.R.L.Phani Kumar Dadi, Nishad Mankar, and RAJNEESH TYAGI, Amazon Web Services*

## Summary
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-summary"></a>

If you need to migrate a large number of Microsoft SQL Server Always On Failover Cluster Instances (FCIs) quickly, this pattern can help you minimize provisioning time. By using automation and Amazon FSx for Windows File Server, it reduces manual effort, human error, and the time required to deploy a large number of clusters.

This pattern sets up the infrastructure for SQL Server FCIs in a Multi-Availability Zone (Multi-AZ) deployment on Amazon Web Services (AWS). The provisioning of the AWS services required for this infrastructure is automated by using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) templates. SQL Server installation and cluster node creation on an [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) instance are performed by using PowerShell commands.

This solution uses a highly available Multi-AZ [Amazon FSx for Windows](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) file system as the shared witness for storing the SQL Server database files. The Amazon FSx file system and EC2 Windows instances that host SQL Server are joined to the same AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) domain.

## Prerequisites and limitations
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An AWS user with sufficient permissions to provision resources using AWS CloudFormation templates
+ AWS Directory Service for Microsoft Active Directory
+ Credentials in AWS Secrets Manager to authenticate to AWS Managed Microsoft AD in a key-value pair:
  + `ADDomainName`: <Domain Name>
  + `ADDomainJoinUserName`: <Domain Username>
  + `ADDomainJoinPassword`: <Domain User Password>
  + `TargetOU`: <Target OU Value>
**Note**  
You will use the same key name in AWS Systems Manager automation for the AWS Managed Microsoft AD join activity.
+ SQL Server media files for the SQL Server installation, and the Windows service or domain accounts that will be used during cluster creation
+ A virtual private cloud (VPC), with two public subnets in separate Availability Zones, two private subnets in the Availability Zones, an internet gateway, NAT gateways, route table associations, and a jump server
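The Secrets Manager prerequisite above can be scripted. The following sketch builds the key-value JSON payload with the exact key names that the Systems Manager automation expects; the values and the commented `create_secret` call are illustrative placeholders:

```python
import json

def build_domain_join_secret(domain, user, password, target_ou):
    """Return the JSON payload for the domain-join secret.

    The key names must match the ones used by the Systems Manager
    automation for the AWS Managed Microsoft AD join activity.
    """
    return json.dumps({
        "ADDomainName": domain,
        "ADDomainJoinUserName": user,
        "ADDomainJoinPassword": password,
        "TargetOU": target_ou,
    })

# The payload would then be stored with, for example (secret name is
# a placeholder):
# boto3.client("secretsmanager").create_secret(
#     Name="ad-domain-join",
#     SecretString=build_domain_join_secret(
#         "corp.example.com", "joinuser", "...", "OU=Servers,DC=corp"))
```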

**Product versions**
+ Windows Server 2012 R2 and Microsoft SQL Server 2016

## Architecture
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-architecture"></a>

**Source technology stack**
+ On-premises SQL Server with FCIs using a shared drive

**Target technology stack**
+ Amazon EC2 instances
+ Amazon FSx for Windows File Server
+ AWS Systems Manager Automation runbook
+ Network configurations (VPC, subnets, internet gateway, NAT gateways, jump server, security groups)
+ AWS Secrets Manager
+ AWS Managed Microsoft AD
+ Amazon EventBridge
+ AWS Identity and Access Management (IAM)

**Target architecture**

The following diagram shows an AWS account in a single AWS Region. The VPC spans two Availability Zones and includes two public subnets with NAT gateways, a jump server in the first public subnet, and two private subnets, each with an EC2 instance for a SQL Server node in a node security group. An Amazon FSx file system connects to each of the SQL Server nodes. The architecture also includes AWS Directory Service, Amazon EventBridge, AWS Secrets Manager, and AWS Systems Manager.

![\[Multi-AZ architecture with resources in public and private subnets, with node security groups.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f09c0164-be2d-4665-a574-7ec29fd25082/images/543829a9-e130-4542-9c4e-7518c6cbe967.png)


**Automation and scale**
+ You can use AWS Systems Manager to join the instances to AWS Managed Microsoft AD and to perform the SQL Server installation.

## Tools
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Directory Service](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html) provides multiple ways to use Microsoft Active Directory (AD) with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS) for SQL Server, and Amazon FSx for Windows File Server.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints through API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.

**Other tools**
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS. This pattern uses PowerShell scripts.

**Code repository**

The code for this pattern is available in the GitHub [aws-windows-failover-cluster-automation](https://github.com/aws-samples/aws-windows-failover-cluster-automation) repository.

## Best practices
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-best-practices"></a>
+ The IAM roles that are used to deploy this solution should adhere to the principle of least privilege. For more information, see the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+ Follow the [AWS CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html).

## Epics
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-epics"></a>

### Deploy the infrastructure
<a name="deploy-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Systems Manager CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | AWS DevOps, DevOps engineer | 
| Deploy the infrastructure stack. | After successful deployment of the Systems Manager stack, create the `infra` stack, which includes EC2 instance nodes, security groups, the Amazon FSx for Windows File Server file system, and the IAM role. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | AWS DevOps, DevOps engineer | 

### Set up the Windows SQL Server Always On FCI
<a name="set-up-the-windows-sql-server-always-on-fci"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Windows tools. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | AWS DevOps, DevOps engineer, DBA | 
| Prestage the cluster computer objects in Active Directory Domain Services. | To prestage the cluster name object (CNO) in Active Directory Domain Services (AD DS) and prestage a virtual computer object (VCO) for a clustered role, follow the instructions in the [Windows Server documentation](https://learn.microsoft.com/en-us/windows-server/failover-clustering/prestage-cluster-adds). | AWS DevOps, DBA, DevOps engineer | 
| Create the WSFC. | To create the Windows Server Failover Clustering (WSFC) cluster, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | AWS DevOps, DBA, DevOps engineer | 
| Install the SQL Server failover cluster. | After the WSFC cluster is set up, install the SQL Server cluster on the primary instance (node1).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html)<pre>D:\setup.exe /Q  `<br />/ACTION=InstallFailoverCluster `<br />/IACCEPTSQLSERVERLICENSETERMS `<br />/FEATURES="SQL,IS,BC,Conn"  `<br />/INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"  `<br />/INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"  `<br />/RSINSTALLMODE="FilesOnlyMode"  `<br />/INSTANCEID="MSSQLSERVER" `<br />/INSTANCENAME="MSSQLSERVER"  `<br />/FAILOVERCLUSTERGROUP="SQL Server (MSSQLSERVER)"  `<br />/FAILOVERCLUSTERIPADDRESSES="IPv4;<2nd Sec Private Ip node1>;Cluster Network 1;<subnet mask>"  `<br />/FAILOVERCLUSTERNETWORKNAME="<Fail over cluster Network Name>"  `<br />/INSTANCEDIR="C:\Program Files\Microsoft SQL Server"  `<br />/ENU="True"  `<br />/ERRORREPORTING=0  `<br />/SQMREPORTING=0  `<br />/SAPWD="<Domain User password>" `<br />/SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"  `<br />/SQLSYSADMINACCOUNTS="<domain\username>" `<br />/SQLSVCACCOUNT="<domain\username>"  /SQLSVCPASSWORD="<Domain User password>" `<br />/AGTSVCACCOUNT="<domain\username>"  /AGTSVCPASSWORD="<Domain User password>" `<br />/ISSVCACCOUNT="<domain\username>" /ISSVCPASSWORD="<Domain User password>"  `<br />/FTSVCACCOUNT="NT Service\MSSQLFDLauncher"  `<br />/INSTALLSQLDATADIR="\\<FSX DNS name>\share\Program Files\Microsoft SQL Server"  `<br />/SQLUSERDBDIR="\\<FSX DNS name>\share\data"  `<br />/SQLUSERDBLOGDIR="\\<FSX DNS name>\share\log" `<br />/SQLTEMPDBDIR="T:\tempdb"  `<br />/SQLTEMPDBLOGDIR="T:\log"  `<br />/SQLBACKUPDIR="\\<FSX DNS name>\share\SQLBackup" `<br />/SkipRules=Cluster_VerifyForErrors `<br />/INDICATEPROGRESS</pre> | AWS DevOps, DBA, DevOps engineer | 
| Add a secondary node to the cluster. | To add SQL Server to the secondary node (node 2), run the following PowerShell command.<pre>D:\setup.exe /Q  `<br />/ACTION=AddNode `<br />/IACCEPTSQLSERVERLICENSETERMS `<br />/INSTANCENAME="MSSQLSERVER"  `<br />/FAILOVERCLUSTERGROUP="SQL Server (MSSQLSERVER)" `<br />/FAILOVERCLUSTERIPADDRESSES="IPv4;<2nd Sec Private Ip node2>;Cluster Network 2;<subnet mask>" `<br />/FAILOVERCLUSTERNETWORKNAME="<Fail over cluster Network Name>" `<br />/CONFIRMIPDEPENDENCYCHANGE=1 `<br />/SQLSVCACCOUNT="<domain\username>"  /SQLSVCPASSWORD="<Domain User password>" `<br />/AGTSVCACCOUNT="<domain\username>"  /AGTSVCPASSWORD="<Domain User password>" `<br />/FTSVCACCOUNT="NT Service\MSSQLFDLauncher" `<br />/SkipRules=Cluster_VerifyForErrors `<br />/INDICATEPROGRESS</pre> | AWS DevOps, DBA, DevOps engineer | 
| Test the SQL Server FCI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | DBA, DevOps engineer | 
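When you provision many clusters, generating the `setup.exe` switch list from a parameter dictionary helps avoid typos across repeated runs. The following Python sketch (the function name and switch ordering are illustrative) renders switches in the `/SWITCH="value"` form used in the commands above:

```python
def build_setup_args(action, params):
    """Render SQL Server setup.exe switches from a parameter dict.

    `action` is the /ACTION value (for example, "AddNode" or
    "InstallFailoverCluster"); each entry in `params` becomes a
    /KEY="value" switch, quoted so values with spaces survive.
    """
    args = [f"/ACTION={action}", "/IACCEPTSQLSERVERLICENSETERMS", "/Q"]
    for key, value in params.items():
        args.append(f'/{key}="{value}"')
    return " ".join(args)
```

For example, `build_setup_args("AddNode", {"INSTANCENAME": "MSSQLSERVER", ...})` produces a switch string that you can append to `D:\setup.exe` in the PowerShell commands shown in the preceding tasks.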

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources. | To clean up the resources, use the AWS CloudFormation stack deletion process: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) After the stack deletion is complete, the stacks will be in the `DELETE_COMPLETE` state. Stacks in the `DELETE_COMPLETE` state aren’t displayed in the CloudFormation console by default. To display deleted stacks, you must change the stack view filter as described in [Viewing deleted stacks on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-deleted-stacks.html). If the deletion failed, a stack will be in the `DELETE_FAILED` state. For solutions, see [Delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails) in the CloudFormation documentation. | AWS DevOps, DBA, DevOps engineer | 
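The post-deletion check described above can be scripted. The following sketch (the helper name is hypothetical) classifies the `StackStatus` value that CloudFormation reports after a delete request:

```python
def deletion_result(stack_status):
    """Interpret a CloudFormation StackStatus after a delete request.

    The status strings are the documented CloudFormation values;
    DELETE_FAILED means you should consult "Delete stack fails" in
    the CloudFormation troubleshooting documentation.
    """
    if stack_status == "DELETE_COMPLETE":
        return "deleted"
    if stack_status == "DELETE_FAILED":
        return "failed"
    if stack_status == "DELETE_IN_PROGRESS":
        return "in-progress"
    return "unexpected"
```

The status itself would come from, for example, `boto3.client("cloudformation").describe_stacks(StackName=...)`; remember that stacks in the `DELETE_COMPLETE` state are hidden in the console by default.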

## Troubleshooting
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS CloudFormation template failure | If the CloudFormation template fails during deployment, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | 
| AWS Managed Microsoft AD join failure | To troubleshoot the join issues, follow these steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.html) | 

## Related resources
<a name="set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx-resources"></a>
+ [Simplify your Microsoft SQL Server high availability deployments using Amazon FSx for Windows File Server](https://aws.amazon.com/blogs/storage/simplify-your-microsoft-sql-server-high-availability-deployments-using-amazon-fsx-for-windows-file-server/)
+ [Using FSx for Windows File Server with Microsoft SQL Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/sql-server.html)

# Use BMC Discovery queries to extract migration data for migration planning
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning"></a>

*Ben Tailor-Hamblin, Emma Baldry, Simon Cunningham, and Shabnam Khan, Amazon Web Services*

## Summary
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-summary"></a>

This guide provides query examples and steps to help you extract data from your on-premises infrastructure and applications by using BMC Discovery. The pattern shows you how to use BMC Discovery queries to scan your infrastructure and extract software, service, and dependency information. The extracted data is required for the assess and mobilize phases of a large-scale migration to the Amazon Web Services (AWS) Cloud. You can use this data to make critical decisions about which applications to migrate together as part of your migration plan.

## Prerequisites and limitations
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-prereqs"></a>

**Prerequisites**
+ A license for BMC Discovery (formerly BMC ADDM) or the software as a service (SaaS) version of BMC Helix Discovery
+ On-premises or SaaS version of BMC Discovery, [installed](https://docs.bmc.com/docs/discovery/221/installing-1050933835.html) 
**Note**  
For on-premises versions of BMC Discovery, you must install the application on a client network with access to all networking and server devices that are in scope for a migration across multiple data centers. Access to the client network must be provided according to application installation instructions. If the scanning of Windows Server information is required, then you must set up a Windows proxy manager device in the network.
+ [Networking access](https://docs.bmc.com/docs/discovery/221/network-ports-used-for-discovery-communications-1050933821.html) to allow the application to scan devices across data centers, if you’re using BMC Helix Discovery

**Product versions**
+ BMC Discovery 22.2 (12.5)
+ BMC Discovery 22.1 (12.4)
+ BMC Discovery 21.3 (12.3)
+ BMC Discovery 21.05 (12.2)
+ BMC Discovery 20.08 (12.1)
+ BMC Discovery 20.02 (12.0)
+ BMC Discovery 11.3
+ BMC Discovery 11.2
+ BMC Discovery 11.1
+ BMC Discovery 11.0
+ BMC Atrium Discovery 10.2
+ BMC Atrium Discovery 10.1
+ BMC Atrium Discovery 10.0

## Architecture
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-architecture"></a>

The following diagram shows how asset managers can use BMC Discovery queries to scan BMC-modeled applications in both SaaS and on-premises environments.

![\[Architecture that uses BMC Discovery to extract software, service, and dependency information.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5e549882-8deb-4459-8891-e39bbf80e320/images/0ebb3e68-5828-45aa-86f4-c741c7b6cd94.jpeg)


The diagram shows the following workflow: An asset manager uses BMC Discovery or BMC Helix Discovery to scan database and software instances running on virtual servers hosted on multiple physical servers. The tool can model applications with components spanning multiple virtual and physical servers.

**Technology stack**
+ BMC Discovery
+ BMC Helix Discovery

## Tools
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-tools"></a>
+ [BMC Discovery](https://docs.bmc.com/xwiki/bin/view/IT-Operations-Management/Discovery/BMC-Discovery/) is a data center discovery tool that helps you automatically discover your data center.
+ [BMC Helix Discovery](https://www.bmc.com/it-solutions/bmc-helix-discovery.html) is a SaaS-based discovery and dependency modeling system that helps you dynamically model your data assets and their dependencies.

## Best practices
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-best-practices"></a>

It's a best practice to map application, dependency, and infrastructure data when you migrate to the cloud. Mapping helps you understand the complexity of your current environment and the dependencies among various components.

The asset information these queries provide is important for several reasons:

1. **Planning** – Understanding the dependencies between components helps you plan the migration process more effectively. For example, you might need to migrate certain components first in order to ensure that others can be migrated successfully.

1. **Risk assessment** – Mapping the dependencies between components can help you identify any potential risks or issues that can arise during the migration process. For example, you might discover that certain components rely on outdated or unsupported technologies that could cause issues in the cloud.

1. **Cloud architecture** – Mapping your application and infrastructure data can also help you to design a suitable cloud architecture that meets your organizational needs. For example, you might need to design a multi-tier architecture to support high availability or scalability requirements.

Overall, mapping application, dependency, and infrastructure data is a crucial step in the cloud migration process. The mapping exercise can help you better understand your current environment, identify any potential issues or risks, and design a suitable cloud architecture.

## Epics
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-epics"></a>

### Identify and evaluate discovery tooling
<a name="identify-and-evaluate-discovery-tooling"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify ITSM owners. | Identify the IT Service Management (ITSM) owners (usually by reaching out to the operational support teams). | Migration lead | 
| Check CMDB.  | Identify the number of configuration management databases (CMDBs) that contain asset information, and then identify the sources of that information. | Migration lead | 
| Identify discovery tools and check for use of BMC Discovery. | If your organization is using BMC Discovery to send data about your environment to the CMDB tool, check the scope and coverage of its scans. For example, check if BMC Discovery is scanning all data centers and if the access servers are located in perimeter zones. | Migration lead | 
| Check the level of application modeling. | Check if applications are modeled in BMC Discovery. If not, recommend using BMC Discovery to model which running software instances provide an application and business service. | Migration engineer, Migration lead | 

### Extract infrastructure data
<a name="extract-infrastructure-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Extract data on physical and virtual servers. | To extract data on the physical and virtual servers scanned by BMC Discovery, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search Host show key as 'Serverid', virtual, name as 'HOSTNAME', os_type as 'osName', os_version as 'OS Version', num_logical_processors as 'Logical Processor Counts', cores_per_processor as 'Cores per Processor', logical_ram as 'Logical RAM', #Consumer:StorageUse:Provider:DiskDrive.size as 'Size'</pre>You can use extracted data to determine the appropriate instance sizes for migration. | Migration engineer, Migration lead | 
| Extract data on modeled applications. | If your applications are modeled in BMC Discovery, you can extract data about the servers that run the application software. To get the server names, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search SoftwareInstance show key as 'ApplicationID', #RunningSoftware:HostedSoftware:Host:Host.key as 'ReferenceID', type, name</pre>Applications are modeled in BMC Discovery by a collection of running software instances. The application is dependent on all the servers that run the application software. | BMC Discovery application owner | 
| Extract data on databases. | To get a list of all scanned databases and the servers these databases are running on, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search Database show key as 'Key', name, type as 'Source Engine Type', #Detail:Detail:ElementWithDetail:SoftwareInstance.name as 'Software Instance', #Detail:Detail:ElementWithDetail:SoftwareInstance.product_version as 'Product Version', #Detail:Detail:ElementWithDetail:SoftwareInstance.edition as 'Edition', #Detail:Detail:ElementWithDetail:SoftwareInstance.#RunningSoftware:HostedSoftware:Host:Host.key as 'ServerID'</pre> | App owner | 
| Extract data on server communication. | To get information on all the network communications between servers that’s collected by BMC Discovery from historic network communications logs, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search Host<br /> TRAVERSE InferredElement:Inference:Associate:DiscoveryAccess<br /> TRAVERSE DiscoveryAccess:DiscoveryAccessResult:DiscoveryResult:NetworkConnectionList<br /> TRAVERSE List:List:Member:DiscoveredNetworkConnection<br /> PROCESS WITH networkConnectionInfo</pre> | BMC Discovery application owner | 
| Extract data on application discovery. | To get information on application dependencies, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search SoftwareInstance show key as 'SRC App ID', #Dependant:Dependency:DependedUpon:SoftwareInstance.key as 'DEST App ID'</pre> | BMC Discovery application owner | 
| Extract data on business services. | To extract data on business services provided by hosts, use [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html) to run the following query:<pre>search Host show name, #Host:HostedSoftware:AggregateSoftware:BusinessService.name as 'Name'</pre> | BMC Discovery application owner | 
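After you export the application dependency query results (for example, as CSV with the `SRC App ID` and `DEST App ID` column aliases used in the query above), you can build a dependency lookup for wave planning. A minimal Python sketch, assuming that CSV layout:

```python
import csv
from collections import defaultdict
from io import StringIO

def dependency_map(csv_text):
    """Group DEST App IDs under each SRC App ID.

    `csv_text` is an exported BMC Discovery dependency query result
    with 'SRC App ID' and 'DEST App ID' columns. Applications that
    share dependencies are candidates to migrate in the same wave.
    """
    deps = defaultdict(set)
    for row in csv.DictReader(StringIO(csv_text)):
        deps[row["SRC App ID"]].add(row["DEST App ID"])
    return dict(deps)
```

The same approach applies to the server communication and business service exports: group by source key, then look for clusters of interdependent assets when you plan migration waves.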

## Troubleshooting
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| A query fails to run or contains unpopulated columns. | Review the asset records in BMC Discovery and determine which fields you require. Then, replace these fields in the query by using the [Query Builder](https://docs.bmc.com/docs/discovery/221/query-builder-1051985747.html). | 
| The details of a dependent asset aren’t populated. | This is likely due to access permissions or network connectivity. The discovery tool might not have the necessary permissions to access certain assets, particularly if they are on different networks or in different environments. We recommend that you work closely with discovery subject matter experts to ensure that all relevant assets are identified. | 

## Related resources
<a name="use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning-resources"></a>

**References**
+ [BMC Discovery Licensing entitlement](https://docs.bmc.com/docs/discovery/bmc-discovery-licensing-entitlement-531336348.html) (BMC documentation)
+ [BMC Discovery features and components](https://docs.bmc.com/docs/discovery/221/bmc-discovery-features-and-components-1052418000.html) (BMC documentation)
+ [BMC Discovery User Guide](https://docs.bmc.com/xwiki/bin/view/IT-Operations-Management/Discovery/BMC-Discovery/) (BMC documentation)
+ [Searching for data (on BMC Discovery)](https://docs.bmc.com/docs/discovery/120/searching-for-data-911457232.html) (BMC documentation)
+ [Portfolio discovery and analysis for migration](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-portfolio-discovery/welcome.html) (AWS Prescriptive Guidance)

**Tutorials and videos**
+ [BMC Discovery: Webinar - Reporting Query Best Practices (Part 1)](https://www.youtube.com/watch?v=iwXy6x40kO8) (YouTube)