Migration Hub Import - AWS Application Discovery Service

Migration Hub Import

AWS Migration Hub (Migration Hub) import allows you to import details of your on-premises environment directly into Migration Hub without using the Application Discovery Service Agentless Collector (Agentless Collector) or AWS Application Discovery Agent (Discovery Agent), so you can perform migration assessment and planning directly from your imported data. You can also group your devices as applications and track their migration status.

To initiate an import request

  • Download the specially formatted, comma-separated value (CSV) import template.

  • Populate it with your existing on-premises server data.

  • Upload it to Migration Hub using the Migration Hub console, AWS CLI, or one of the AWS SDKs.

You can submit multiple import requests. Each request is processed sequentially. You can check the status of your import requests at any time through the console or the import APIs.

After an import request is complete, you can view the details of individual imported records. View utilization data, tags, and application mappings directly from within the Migration Hub console. If errors were encountered during the import, you can review the count of successful and failed records, and you can see the error details for each failed record.

Handling errors: A link is provided to download the error log and failed records files as CSV files in a compressed archive. Use these files to resubmit your import request after correcting the errors.

Limits apply to the number of imported records, imported servers, and deleted records you can keep. For more information, see AWS Application Discovery Service Quotas.

Supported Import File Fields

Migration Hub import allows you to import data from any source. The data must be provided in the supported CSV format, and it must contain only the supported fields, with values within the supported ranges for those fields.

An asterisk next to an import field name in the following list denotes that it is a required field. Each record in your import file must have at least one of these required fields populated to uniquely identify a server or application. A record without any of the required fields will fail to import.

Note

If you use either VMware.MoRefId or VMware.VCenterId to identify a record, you must have both fields in the same record.

Supported import fields, their descriptions, and example values:

  • ExternalId* – A custom identifier that allows you to mark each record as unique. For example, ExternalId can be the inventory ID for the server in your data center. Examples: Inventory Id 1; Server 2; CMDB Id 3

  • SMBiosId – The system management BIOS (SMBIOS) ID.

  • IPAddress* – A comma-delimited list of the server's IP addresses, in quotes. Examples: 192.0.0.2; "10.12.31.233, 10.12.32.11"

  • MACAddress* – A comma-delimited list of the server's MAC addresses, in quotes. Examples: 00:1B:44:11:3A:B7; "00-15-E9-2B-99-3C, 00-14-22-01-23-45"

  • HostName* – The host name of the server. We recommend using the fully qualified domain name (FQDN) for this value. Examples: ip-1-2-3-4; localhost.domain

  • VMware.MoRefId* – The managed object reference ID. Must be provided with a VMware.VCenterId.

  • VMware.VCenterId* – The unique identifier of the virtual machine. Must be provided with a VMware.MoRefId.

  • CPU.NumberOfProcessors – The number of CPUs. Example: 4

  • CPU.NumberOfCores – The total number of physical cores. Example: 8

  • CPU.NumberOfLogicalCores – The total number of threads that can run concurrently on all CPUs in a server. Some CPUs support multiple threads running concurrently on a single CPU core; in those cases, this number is larger than the number of physical (or virtual) cores. Example: 16

  • OS.Name – The name of the operating system. Examples: Linux; Windows

  • OS.Version – The version of the operating system. Examples: 16.04.3; NT 6.2.8

  • VMware.VMName – The name of the virtual machine. Example: Corp1

  • RAM.TotalSizeInMB – The total RAM available on the server, in MB. Examples: 64; 128

  • RAM.UsedSizeInMB.Avg – The average amount of used RAM on the server, in MB. Examples: 64; 128

  • RAM.UsedSizeInMB.Max – The maximum amount of used RAM on the server, in MB. Examples: 64; 128

  • CPU.UsagePct.Avg – The average CPU utilization while the discovery tool was collecting data. Examples: 45; 23.9

  • CPU.UsagePct.Max – The maximum CPU utilization while the discovery tool was collecting data. Examples: 55.34; 24

  • DiskReadsPerSecondInKB.Avg – The average number of disk reads per second, in KB. Examples: 1159; 84506

  • DiskWritesPerSecondInKB.Avg – The average number of disk writes per second, in KB. Examples: 199; 6197

  • DiskReadsPerSecondInKB.Max – The maximum number of disk reads per second, in KB. Examples: 37892; 869962

  • DiskWritesPerSecondInKB.Max – The maximum number of disk writes per second, in KB. Examples: 18436; 1808

  • DiskReadsOpsPerSecond.Avg – The average number of disk read operations per second. Examples: 45; 28

  • DiskWritesOpsPerSecond.Avg – The average number of disk write operations per second. Examples: 8; 3

  • DiskReadsOpsPerSecond.Max – The maximum number of disk read operations per second. Examples: 1083; 176

  • DiskWritesOpsPerSecond.Max – The maximum number of disk write operations per second. Examples: 535; 71

  • NetworkReadsPerSecondInKB.Avg – The average number of network read operations per second, in KB. Examples: 45; 28

  • NetworkWritesPerSecondInKB.Avg – The average number of network write operations per second, in KB. Examples: 8; 3

  • NetworkReadsPerSecondInKB.Max – The maximum number of network read operations per second, in KB. Examples: 1083; 176

  • NetworkWritesPerSecondInKB.Max – The maximum number of network write operations per second, in KB. Examples: 535; 71

  • Applications – A comma-delimited list of applications that include this server, in quotes. This value can include existing applications and/or new applications that are created upon import. Examples: Application1; "Application2, Application3"

  • Tags – A comma-delimited list of tags formatted as name:value, in quotes. Important: do not store sensitive information (like personal data) in tags. Examples: "zone:1, critical:yes"; "zone:3, critical:no, zone:1"

You can import data even if you don't have data populated for all the fields defined in the import template, so long as each record has at least one of the required fields within it. Duplicates are managed across multiple import requests by using either an external or internal matching key. If you populate your own matching key, ExternalId, that field is used to uniquely identify and import the records. If no matching key is specified, import uses an internally generated matching key that is derived from some of the columns in the import template. For more information on this matching, see Matching logic for discovered servers and applications.
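For illustration, here is a minimal sketch of an import file that populates a subset of the supported fields. The header row uses the field names from the list above; all server values are hypothetical.

    ExternalId,IPAddress,MACAddress,HostName,OS.Name,OS.Version,CPU.NumberOfCores,RAM.TotalSizeInMB,Applications,Tags
    Inventory Id 1,192.0.0.2,00:1B:44:11:3A:B7,host1.example.com,Linux,16.04.3,8,128,Application1,"zone:1, critical:yes"
    Server 2,"10.12.31.233, 10.12.32.11","00-15-E9-2B-99-3C, 00-14-22-01-23-45",host2.example.com,Windows,NT 6.2.8,4,64,"Application2, Application3","zone:3, critical:no"

Because each record populates ExternalId, resubmitting a corrected version of this file updates the existing records rather than creating duplicates.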

Note

Migration Hub import does not support any fields outside of those defined in the import template. Any custom fields supplied will be ignored and will not be imported.

Setting Up Your Import Permissions

Before you can import your data, ensure that your IAM user has the necessary Amazon S3 permissions to upload your import file to Amazon S3 (s3:PutObject) and to read the object (s3:GetObject). You must also establish programmatic access (for the AWS CLI) or console access by creating an IAM policy and attaching it to the IAM user that performs imports in your AWS account.

Console Permissions

Use the following procedure to edit the permissions policy for the IAM user that will make import requests in your AWS account using the console.

To edit a user's attached managed policies
  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

  2. In the navigation pane, choose Users.

  3. Choose the name of the user whose permissions policy you want to change.

  4. Choose the Permissions tab and choose Add permissions.

  5. Choose Attach existing policies directly, and then choose Create policy.

    1. In the Create policy page that opens, choose JSON, and paste in the following policy. Remember to replace importBucket with the actual name of the bucket that the IAM user will upload the import files into.

      { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::importBucket"] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::importBucket/*"] } ] }
    2. Choose Review policy.

    3. Give your policy a name and an optional description, and then review the summary of the policy.

    4. Choose Create policy.

  6. Return to the Grant permissions IAM console page for the user that will make import requests in your AWS account.

  7. Refresh the table of policies, and search for the name of the policy you just created.

  8. Choose Next: Review.

  9. Choose Add permissions.

Now that you've added the policy to your IAM user, you're ready to start the import process.

AWS CLI Permissions

Use the following procedure to create the managed policies necessary to give an IAM user the permissions to make import data requests using the AWS CLI.

To create and attach the managed policies
  1. Use the aws iam create-policy AWS CLI command to create an IAM policy with the following permissions. Remember to replace importBucket with the actual name of the bucket that the IAM user will upload the import files into.

    { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::importBucket"] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::importBucket/*"] } ] }

    For more information on using this command, see create-policy in the AWS CLI Command Reference.

  2. Use the aws iam create-policy AWS CLI command to create an additional IAM policy with the following permissions.

    { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "discovery:ListConfigurations", "discovery:CreateApplication", "discovery:UpdateApplication", "discovery:AssociateConfigurationItemsToApplication", "discovery:DisassociateConfigurationItemsFromApplication", "discovery:GetDiscoverySummary", "discovery:StartImportTask", "discovery:DescribeImportTasks", "discovery:BatchDeleteImportData" ], "Resource": "*" } ] }
  3. Use the aws iam attach-user-policy AWS CLI command to attach the policies you created in the previous two steps to the IAM user that will be performing import requests in your AWS account using the AWS CLI. For more information on using this command, see attach-user-policy in the AWS CLI Command Reference.
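Putting those steps together, a minimal sketch might look like the following. The policy names, the local JSON file names, the user name import-user, and the account ID are all hypothetical placeholders.

    # Create the two policies from local JSON files (hypothetical file names)
    aws iam create-policy \
        --policy-name MigrationHubImportS3Access \
        --policy-document file://import-s3-policy.json

    aws iam create-policy \
        --policy-name MigrationHubImportDiscoveryAccess \
        --policy-document file://import-discovery-policy.json

    # Attach both policies to the IAM user that performs imports
    aws iam attach-user-policy \
        --user-name import-user \
        --policy-arn arn:aws:iam::123456789012:policy/MigrationHubImportS3Access

    aws iam attach-user-policy \
        --user-name import-user \
        --policy-arn arn:aws:iam::123456789012:policy/MigrationHubImportDiscoveryAccess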

Now that you've added the policies to your IAM user, you're ready to start the import process.

Remember that when the IAM user uploads objects to the Amazon S3 bucket that you specified, they must leave the default permissions for the objects in place so that the user can read them.

Uploading Your Import File to Amazon S3

Next, you must upload your CSV-formatted import file to Amazon S3 so that it can be imported. Before you begin, you should already have created or chosen the Amazon S3 bucket that will house your import file.

Console S3 Upload
To upload your import file to Amazon S3
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

  2. In the Bucket name list, choose the name of the bucket that you want to upload your object to.

  3. Choose Upload.

  4. In the Upload dialog box, choose Add files to choose the file to upload.

  5. Choose a file to upload, and then choose Open.

  6. Choose Upload.

  7. Once your file has been uploaded, choose the name of your data file object from your bucket dashboard.

  8. From the Overview tab of the object details page, copy the Object URL. You'll need this when you create your import request.

  9. Go to the Import page in the Migration Hub console as described in Importing Data. Then, paste the object URL in the Amazon S3 Object URL field.

AWS CLI S3 Upload
To upload your import file to Amazon S3
  1. Open a terminal window and navigate to the directory that your import file is saved to.

  2. Enter the following command:

    aws s3 cp ImportFile.csv s3://BucketName/ImportFile.csv
  3. This returns the following results:

    upload: .\ImportFile.csv to s3://BucketName/ImportFile.csv
  4. Copy the full Amazon S3 object path that was returned. You will need this when you create your import request.
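Optionally, before creating the import request, you can confirm that the file landed in the bucket by listing its contents (BucketName is the same placeholder used above):

    aws s3 ls s3://BucketName/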

Importing Data

After you download the import template from the Migration Hub console and populate it with your existing on-premises server data, you're ready to start importing the data into Migration Hub. The following instructions describe two ways to do this, either by using the console or by making API calls through the AWS CLI.

Console Import

Start data import on the Tools page of the Migration Hub console.

To start data import
  1. In the navigation pane, under Discover, choose Tools.

  2. If you don't already have an import template filled out, you can download the template by choosing import template in the Import box. Open the downloaded template and populate it with your existing on-premises server data. You can also download the import template from our Amazon S3 bucket at https://s3.us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv

  3. To open the Import page, choose Import in the Import box.

  4. Under Import name, specify a name for the import.

  5. Fill out the Amazon S3 Object URL field. To do this step, you'll need to upload your import data file to Amazon S3. For more information, see Uploading Your Import File to Amazon S3.

  6. Choose Import in the lower-right area. This will open the Imports page where you can see your import and its status listed in the table.

After following the preceding procedure to start your data import, the Imports page shows details of each import request, including its progress status, completion time, and the number of successful or failed records, with the ability to download those records. From this screen, you can also navigate to the Servers page under Discover to see the actual imported data.

On the Servers page, you can see a list of all the servers (devices) that were discovered, along with the import name. When you navigate from the Imports (import history) page by selecting the name of an import in the Name column, you are taken to the Servers page with a filter applied based on that import's data set, so you see only the data belonging to that particular import.

The downloadable failed-records archive is in .zip format and contains two files: an errors file and a failed-entries file. The errors file contains a list of error messages for each failed line, along with the associated column name from your data file, so you can quickly identify where problems occurred. The failed-entries file includes each failed line with all of its provided columns. Make the changes called out in the errors file in the failed-entries file, and then attempt the import again with the corrected information.

AWS CLI Import

To start the data import process from the AWS CLI, the AWS CLI must first be installed in your environment. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

Note

If you don't already have an import template filled out, you can download the import template from our Amazon S3 bucket here: https://s3.us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv

To start data import
  1. Open a terminal window, and type the following command:

    aws discovery start-import-task --import-url s3://BucketName/ImportFile.csv --name ImportName
  2. This creates your import task and returns the following status information:

    { "task": { "status": "IMPORT_IN_PROGRESS", "applicationImportSuccess": 0, "serverImportFailure": 0, "serverImportSuccess": 0, "name": "ImportName", "importRequestTime": 1547682819.801, "applicationImportFailure": 0, "clientRequestToken": "EXAMPLE1-abcd-1234-abcd-EXAMPLE1234", "importUrl": "s3://BucketName/ImportFile.csv", "importTaskId": "import-task-EXAMPLE1229949eabfEXAMPLE03862c0" } }

Tracking Your Migration Hub Import Requests

You can track the status of your Migration Hub import requests using the console, AWS CLI, or one of the AWS SDKs.

Console Tracking

The Imports dashboard in the Migration Hub console displays the following elements.

  • Name – The name of the import request.

  • Import ID – The unique ID of the import request.

  • Import time – The date and time that the import request was created.

  • Import status – The status for the import request. This can be one of the following values:

    • Importing – This data file is currently being imported.

    • Imported – The entire data file was successfully imported.

    • Imported with errors – One or more of the records in the data file failed to import. To resolve your failed records, choose Download failed records for your import task, resolve the errors in the failed-entries CSV file, and then import again.

    • Import Failed – None of the records in the data file were imported. To resolve your failed records, choose Download failed records for your import task, resolve the errors in the failed-entries CSV file, and then import again.

  • Imported records – The number of records in a specific data file that were successfully imported.

  • Failed records – The number of records in a specific data file that weren't imported.

CLI Tracking

You can track the status of your import tasks with the aws discovery describe-import-tasks AWS CLI command.

  1. Open a terminal window, and type the following command:

    aws discovery describe-import-tasks
  2. This will return a list of all your import tasks in JSON format, complete with status and other relevant information. Optionally, you can filter results to return a subset of your import tasks.
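For example, a status filter can narrow the list to failed imports. This sketch assumes the STATUS filter name and the IMPORT_FAILED status value from the DescribeImportTasks API; check the CLI reference for the full set of filter names and status values.

    aws discovery describe-import-tasks \
        --filters name=STATUS,values=IMPORT_FAILED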

When tracking your import tasks, you may find that the serverImportFailure value returned is greater than zero. When this happens, your import file had one or more entries that couldn't be imported. Resolve this by downloading your failed records archive, reviewing the files within it, and making another import request with the modified failed-entries.csv file.
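As a sketch of that workflow, each task returned by describe-import-tasks includes an errorsAndFailedEntriesZip value, which is a presigned URL for the failed-records archive. Assuming the placeholder task ID from the earlier example output, you could extract the URL and then download the archive with a tool such as curl (the local file name is hypothetical):

    aws discovery describe-import-tasks \
        --filters name=IMPORT_TASK_ID,values=import-task-EXAMPLE1229949eabfEXAMPLE03862c0 \
        --query 'tasks[0].errorsAndFailedEntriesZip' \
        --output text

    curl -o failed-records.zip "<presigned-url-from-previous-command>"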

After creating your import task, you can perform additional actions to help manage and track your data migration. For example, you can download an archive of failed records for a specific request. For information on using the failed records archive to resolve import issues, see Troubleshooting failed import records.