Migrating data from Apache Cassandra to Amazon DynamoDB
You can use an AWS SCT data extraction agent to extract data from Apache Cassandra and migrate it to Amazon DynamoDB. The agent runs on an Amazon EC2 instance, where it extracts data from Cassandra, writes it to the local file system, and uploads it to an Amazon S3 bucket. You can then use AWS SCT to copy the data to DynamoDB.
Amazon DynamoDB is a NoSQL database service. To store data in DynamoDB, you create database tables and then upload data to those tables. The AWS SCT extraction agent for Cassandra automates the process of creating DynamoDB tables that match their Cassandra counterparts, and then populating those DynamoDB tables with data from Cassandra.
The process of extracting data can add considerable overhead to a Cassandra cluster. For this reason, you don't run the extraction agent directly against your production data in Cassandra. To avoid interfering with production applications, AWS SCT helps you create a clone data center—a standalone copy of the Cassandra data that you want to migrate to DynamoDB. The agent can then read data from the clone and make it available to AWS SCT, without affecting your production applications.
When the data extraction agent runs, it reads data from the clone data center and writes it to an Amazon S3 bucket. AWS SCT then reads the data from Amazon S3 and writes it to DynamoDB.
The following diagram shows the supported scenario.

If you are new to Cassandra, be aware of the following important terminology:
- A node is a single computer (physical or virtual) running the Cassandra software.
- A server is a logical entity composed of up to 256 nodes.
- A rack represents one or more servers.
- A data center is a collection of racks.
- A cluster is a collection of data centers.
For more information, go to the Wikipedia page for Apache Cassandra.
Use the information in the following topics to learn how to migrate data from Apache Cassandra to DynamoDB:
Prerequisites for migrating from Cassandra to DynamoDB
Before you begin, you will need to perform several pre-migration tasks, as described in this section.
Supported Cassandra versions
AWS SCT supports the following Apache Cassandra versions:
- 3.11.2
- 3.1.1
- 3.0
- 2.1.20
Other versions of Cassandra aren't supported.
Amazon S3 settings
When the AWS SCT data extraction agent runs, it reads data from your clone data center and writes it to an Amazon S3 bucket. Before you continue, you must provide the credentials to connect to your AWS account and your Amazon S3 bucket. You store your credentials and bucket information in a profile in the global application settings, and then associate the profile with your AWS SCT project. If necessary, choose Global Settings to create a new profile. For more information, see Storing AWS service profiles in AWS SCT.
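Before you continue, you can optionally confirm from the command line that the credentials you plan to store in the profile can reach your bucket. This is only a sketch; the bucket name (my-sct-staging-bucket) and profile name (sct-migration) are placeholders for your own values.
# Confirm which AWS identity the credentials resolve to
aws sts get-caller-identity --profile sct-migration
# Confirm that the credentials can list the staging bucket
aws s3 ls s3://my-sct-staging-bucket --profile sct-migration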
Amazon EC2 instance for clone data center
As part of the migration process, you'll need to create a clone of an existing Cassandra data center. This clone will run on an Amazon EC2 instance that you provision in advance. The instance will run a standalone Cassandra installation, for hosting your clone data center independently of your existing Cassandra data center.
The new Amazon EC2 instance must meet the following requirements:
- Operating system: either Ubuntu or CentOS.
- Java JDK 8 must be installed. (Other versions are not supported.)
To launch a new instance, go to the Amazon EC2 Management Console at https://console.aws.amazon.com/ec2/.
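After the instance is running, you can verify that it meets these requirements from a shell prompt. A minimal check:
# Confirm the operating system (should report Ubuntu or CentOS)
cat /etc/os-release
# Confirm that Java JDK 8 is installed (should report version 1.8.x)
java -version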
Security settings
AWS SCT communicates with the data extraction agent using Secure Sockets Layer (SSL). To enable SSL, set up a trust store and key store:
- Launch AWS SCT.
- From the Settings menu, choose Global Settings.
- Choose the Security tab.
- Choose Generate Trust and Key Store, or choose Select existing Trust and Key Store.
  If you choose Generate Trust and Key Store, you then specify the name and password for the trust and key stores, and the path to the location for the generated files. You use these files in later steps.
  If you choose Select existing Trust and Key Store, you then specify the password and file name for the trust and key stores. You use these files in later steps.
- After you have specified the trust store and key store, choose OK to close the Global Settings dialog box.
Configure your source OS user
To access your source database, create an OS user on a Cassandra node that is running on Linux.
To create a new OS user
- Create a new user called sct_extractor and set the home directory for this user.
sudo useradd -s /bin/bash -m -d /home/sct_extractor sct_extractor
- Add this user to sudoers.
sudo bash -c "cat << EOF >> /etc/sudoers.d/cassandra-users
sct_extractor ALL=(ALL) NOPASSWD: ALL
EOF"
- Set the password for your user.
sudo passwd sct_extractor
- Create the .ssh directory and the authorized keys file.
sudo mkdir -p /home/sct_extractor/.ssh
sudo touch /home/sct_extractor/.ssh/authorized_keys
- Add your user to the root and cassandra groups.
For package Cassandra installations, use the following command.
sudo usermod -aG [ec2-user|ubuntu|centos],root,cassandra sct_extractor
If you install Cassandra with the binary tarball file, use the following command.
sudo usermod -aG [ec2-user|ubuntu|centos],root sct_extractor
- Add the following permissions.
sudo chown -R sct_extractor:sct_extractor /home/sct_extractor
sudo chown -R [ec2-user|ubuntu|centos]:[ec2-user|ubuntu|centos] /home/[ec2-user|ubuntu|centos]
sudo chmod 750 -R /home/sct_extractor
sudo chmod 750 -R /home/[ec2-user|ubuntu|centos]
Here, [ec2-user|ubuntu|centos] is your OS user.
- Generate an RSA key.
su - sct_extractor
sudo ssh-keygen -a 1000 -b 4096 -C "" -E sha256 -o -t rsa -f /home/sct_extractor/.ssh/id_rsa -N 'cassandra'
cat /home/sct_extractor/.ssh/id_rsa.pub >> /home/sct_extractor/.ssh/authorized_keys
Download the generated key.
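To copy the generated private key to the machine where you run AWS SCT, you can use scp. This sketch assumes that password authentication is allowed for the new user and that the node's address is 203.0.113.10; substitute your own values.
# Run from the machine where AWS SCT is installed (node address is a placeholder)
scp sct_extractor@203.0.113.10:.ssh/id_rsa ./sct_extractor_id_rsa
chmod 600 ./sct_extractor_id_rsa
# Optional check: log in as sct_extractor with the downloaded key
# (the key was created with the passphrase 'cassandra' in the previous step)
ssh -i ./sct_extractor_id_rsa sct_extractor@203.0.113.10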
Configure your source database user
To migrate data from your source database, configure your Cassandra user.
To configure your Cassandra user
- Edit the cassandra.yaml file on all nodes of your Cassandra cluster and change the following properties.
authorizer: CassandraAuthorizer
authenticator: PasswordAuthenticator
- Restart your Cassandra cluster.
sudo service cassandra restart
- Start cqlsh using the default superuser name and password, and then create a user with minimal privileges.
cqlsh -u cassandra -p cassandra
cqlsh> CREATE USER IF NOT EXISTS min_privs WITH PASSWORD 'min_privs' NOSUPERUSER;
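If you want to restrict what the min_privs user can read, you can grant it access to just the keyspaces that you plan to migrate. A minimal sketch, assuming a keyspace named my_keyspace:
# Grant read access on a single keyspace to the migration user (keyspace name is a placeholder)
cqlsh -u cassandra -p cassandra -e "GRANT SELECT ON KEYSPACE my_keyspace TO min_privs;"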
Configure your target database user
Before you migrate data to your target Amazon DynamoDB database, configure the required IAM resources.
To configure IAM resources
- Create an IAM policy that provides access to your Amazon DynamoDB database. Make sure that your IAM policy includes the following permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SidTables",
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": "arn:aws:dynamodb:*:*:table/*"
    },
    {
      "Sid": "SidIndexes",
      "Effect": "Allow",
      "Action": ["dynamodb:Scan", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:*:*:table/*/index/*"
    },
    {
      "Sid": "SidAllResources",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeLimits",
        "dynamodb:DescribeReservedCapacity",
        "dynamodb:DescribeReservedCapacityOfferings",
        "dynamodb:ListTagsOfResource",
        "dynamodb:DescribeTimeToLive"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SidAll",
      "Effect": "Allow",
      "Action": ["dynamodb:ListTables"],
      "Resource": "*"
    }
  ]
}
- Create an IAM policy that provides access to your Amazon S3 bucket. Make sure that your IAM policy includes the following permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MinPrivsS3",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket", "s3:DeleteObject"],
      "Resource": "*"
    }
  ]
}
- Create an IAM policy that provides access to AWS DMS. Make sure that your IAM policy includes the following permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dms:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:ListAliases", "kms:DescribeKey"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole",
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:SimulatePrincipalPolicy",
        "iam:GetUser"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcs",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["cloudwatch:Get*", "cloudwatch:List*"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:FilterLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Sid": "autoscaling",
      "Action": [
        "application-autoscaling:DeleteScalingPolicy",
        "application-autoscaling:DeregisterScalableTarget",
        "application-autoscaling:DescribeScalableTargets",
        "application-autoscaling:DescribeScalingActivities",
        "application-autoscaling:DescribeScalingPolicies",
        "application-autoscaling:PutScalingPolicy",
        "application-autoscaling:RegisterScalableTarget",
        "cloudwatch:DeleteAlarms",
        "cloudwatch:DescribeAlarmHistory",
        "cloudwatch:DescribeAlarms",
        "cloudwatch:DescribeAlarmsForMetric",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "cloudwatch:PutMetricAlarm"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": ["iam:PassRole"],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "iam:PassedToService": ["application-autoscaling.amazonaws.com"]
        }
      }
    }
  ]
}
- Create an IAM role that allows AWS DMS to assume and grant access to your target DynamoDB tables. The minimum set of access permissions is shown in the following IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "dms.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Attach all three IAM policies that you created previously to this IAM role. To create the AWS DMS endpoint, use this role.
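If you prefer to create the role from the command line instead of the IAM console, the following AWS CLI sketch shows the general shape. The role name, policy ARNs, and the local file that holds the trust policy shown above are all placeholders.
# Create the role using the trust policy shown above, saved locally as dms-assume-role.json
aws iam create-role --role-name sct-cassandra-dms-role \
  --assume-role-policy-document file://dms-assume-role.json
# Attach the three policies created in the previous steps (policy ARNs are placeholders)
aws iam attach-role-policy --role-name sct-cassandra-dms-role \
  --policy-arn arn:aws:iam::111122223333:policy/SctDynamoDBAccess
aws iam attach-role-policy --role-name sct-cassandra-dms-role \
  --policy-arn arn:aws:iam::111122223333:policy/SctS3Access
aws iam attach-role-policy --role-name sct-cassandra-dms-role \
  --policy-arn arn:aws:iam::111122223333:policy/SctDmsAccess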
Create a new AWS SCT project
After you have performed the steps in Prerequisites for migrating from Cassandra to DynamoDB, you're ready to create a new AWS SCT project for your migration. Follow these steps:
To create a new AWS SCT project using Apache Cassandra as a source and DynamoDB as a target
- In AWS SCT, choose Add source.
- Choose Cassandra, then choose Next.
  The Add source dialog box appears.
- Provide the Apache Cassandra source database connection information. Use the following settings:
  Connection name: Enter a name for your database. AWS SCT displays this name in the tree in the left panel.
  Server name: Enter the Domain Name Service (DNS) name or IP address of your source database server.
  Server port: Enter the port used to connect to your source database server.
  User name and Password: Enter the user name and password to connect to your source database server. AWS SCT uses the password to connect to your source database only when you choose to connect to your database in a project. To guard against exposing the password for your source database, AWS SCT doesn't store the password by default. If you close your AWS SCT project and reopen it, you are prompted for the password to connect to your source database as needed.
  Use SSL: Choose this option if you want to use Secure Sockets Layer (SSL) to connect to your database. Provide the following additional information, as appropriate, on the SSL tab:
    Trust store: The trust store to use.
    Key store: The key store to use.
  Store password: AWS SCT creates a secure vault to store SSL certificates and database passwords. By turning this option on, you can store the database password and connect quickly to the database without having to enter the password.
- Choose Test Connection to verify that AWS SCT can connect to your source database.
- Choose Connect to connect to your source database.
- In AWS SCT, choose Add target.
- Choose Amazon DynamoDB, then choose Next.
  The Add target dialog box appears.
- Provide the DynamoDB target database connection information. Use the following settings:
  Connection name: Enter a name for your database. AWS SCT displays this name in the tree in the right panel.
  Copy from AWS profile: If you've already set up your AWS credentials, choose the name of an existing profile.
  AWS access key: Enter your AWS access key.
  AWS secret key: Enter the secret key associated with your AWS access key.
  Region: Choose an AWS Region. AWS SCT will migrate your data to DynamoDB in that Region.
- Choose Test Connection to verify that AWS SCT can connect to your target database.
- Choose Connect to connect to your target database.
- Create a new mapping rule that describes a source-target pair that includes your Apache Cassandra source database and a target DynamoDB database. For more information, see Adding a new mapping rule.
Create a clone data center
To avoid interfering with production applications that use your Cassandra cluster, AWS SCT will create a clone data center and copy your production data into it. The clone data center acts as a staging area, so that AWS SCT can perform further migration activities using the clone rather than your production data center.
To begin the cloning process, follow this procedure:
- In the AWS SCT window, on the left-hand side (source), expand the Datacenters node and choose one of your existing Cassandra data centers.
- From the Actions menu, choose Clone Datacenter for Extract.
- Read the introductory text, and then choose Next to continue.
- In the Clone Datacenter for Extract window, add the following information:
  Private IP:SSH port: Enter the private IP address and SSH port for any of the nodes in your Cassandra cluster. For example: 172.28.37.102:22
  Public IP:SSH port: Enter the public IP address and SSH port for the node. For example: 41.184.48.27:22
  OS User: Type a valid user name for connecting to the node.
  OS Password: Type the password associated with the user name.
  Key path: If you have an SSH private key (.pem file) for this node, choose Browse to navigate to the location where the private key is stored.
  Passphrase: If your SSH private key is protected by a passphrase, type it here.
  JMX user: Enter the JMX user name for accessing your Cassandra cluster.
  JMX password: Type the password associated with the JMX user.
  Choose Next to continue. AWS SCT connects to the Cassandra node, where it runs the nodetool status command.
- In the Source Cluster Parameters window, accept the default values, and choose Next to continue.
- In the Node Parameters window, verify the connection details for all of the nodes in the source cluster. AWS SCT will fill in some of these details automatically; however, you must supply any missing information.
  Note
  Instead of entering all of the data here, you can bulk-upload it. To do this, choose Export to create a .csv file. You can then edit this file, adding a new line for each node in your cluster. When you are done, choose Upload. AWS SCT will read the .csv file and use it to populate the Node parameters window.
  Choose Next to continue. AWS SCT verifies that the node configuration is valid.
- In the Configure Target Datacenter window, review the default values. In particular, note the Datacenter suffix field: when AWS SCT creates your clone data center, it is named similarly to the source data center, but with the suffix that you provide. For example, if the source data center is named my_datacenter, then a suffix of _tgt would cause the clone to be named my_datacenter_tgt.
- While still in the Configure Target Datacenter window, choose Add new node.
- In the Add New Node window, add the information needed to connect to the Amazon EC2 instance that you created in Amazon EC2 instance for clone data center.
  When the settings are as you want them, choose Add. The node appears in the list.
- Choose Next to continue. A confirmation box appears; choose OK to continue. AWS SCT reboots your source data center, one node at a time.
- Review the information in the Datacenter Synchronization window. If your cluster is running Cassandra version 2, then AWS SCT copies all of the data to the clone data center. If your cluster is running Cassandra version 3, then you can choose which keyspace or keyspaces you want to copy to the clone data center.
  When you are ready to begin replicating data to your clone data center, choose Start.
  Data replication will begin immediately. AWS SCT displays a progress bar so that you can monitor the replication process. Note that replication can take a long time, depending on how much data is in the source data center. If you need to cancel the operation before it's fully complete, choose Cancel.
  When the replication is complete, choose Next to continue.
- In the Summary window, AWS SCT displays a report showing the state of your Cassandra cluster, along with next steps.
  Review the information in this report, and then choose Finish to complete the wizard.
Install, configure, and run the data extraction agent
Now that you have a clone of your data center, you are ready to begin using the AWS SCT data extraction agent for Cassandra. This agent is available as part of the AWS SCT distribution (for more information, see Installing, verifying, and updating AWS SCT).
Note
We recommend that you run the agent on an Amazon EC2 instance. The Amazon EC2 instance must meet the following requirements:
- Operating system: either Ubuntu or CentOS.
- 8 virtual CPUs, at a minimum.
- At least 16 GB of RAM.
If you don't already have an Amazon EC2 instance that meets these requirements, go to the Amazon EC2 Management Console (https://console.aws.amazon.com/ec2/) and launch a new instance before proceeding.
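You can confirm that an existing instance meets the CPU and memory requirements with two standard commands:
# Number of virtual CPUs (should be 8 or more)
nproc
# Total memory (should be at least 16 GB)
free -h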
Follow this procedure to install, configure, and run the AWS SCT data extraction agent for Cassandra:
- Log in to your Amazon EC2 instance.
  Verify that Java 1.8.x is installed:
  java -version
- Install the sshfs package:
  sudo yum install sshfs
- Install the expect package:
  sudo yum install expect
- Edit the /etc/fuse.conf file, and uncomment the string user_allow_other:
  # mount_max = 1000
  user_allow_other
- The AWS SCT data extraction agent for Cassandra is available as part of the AWS SCT distribution (for more information, see Installing, verifying, and updating AWS SCT). You can find the agent in the .zip file that contains the AWS SCT installer file, in the agents directory. The following builds of the agent are available:
  File name                                   Operating system
  aws-cassandra-extractor-n.n.n.deb           Ubuntu
  aws-cassandra-extractor-n.n.n.x86_64.rpm    CentOS
  Choose the file that's appropriate for your Amazon EC2 instance. Use the scp utility to upload that file to your Amazon EC2 instance.
- Install the AWS SCT data extraction agent for Cassandra. (Replace n.n.n with the build number.)
  For Ubuntu:
  sudo dpkg -i aws-cassandra-extractor-n.n.n.deb
  For CentOS:
  sudo yum install aws-cassandra-extractor-n.n.n.x86_64.rpm
  During the installation process, you'll be asked to select the Cassandra version you want to work with. Choose version 3 or 2, as appropriate.
  After the installation completes, review the following directories to ensure that they were created successfully:
  - /var/log/cassandra-data-extractor/ (extraction agent logs)
  - /mnt/cassandra-data-extractor/ (mount point for home and data folders)
  - /etc/cassandra-data-extractor/ (the agent configuration file, agent-settings.yaml)
- To enable the agent to communicate with AWS SCT, you must have a key store and a trust store available. (You created these in Security settings.) Use the scp utility to upload these files to your Amazon EC2 instance.
  The configuration utility (see next step) requires you to specify the key store and trust store, so you need to have them available.
- Run the configuration utility:
  sudo java -jar /usr/share/aws-cassandra-extractor/aws-cassandra-extractor.jar --configure
  The utility will prompt you for several configuration values. You can use the following example as a guide, while substituting your own values:
  Enter the number of data providers nodes [1]: 1
  Enter IP for Cassandra node 1: 34.220.73.140
  Enter SSH port for Cassandra node <34.220.73.140> [22]: 22
  Enter SSH login for Cassandra node <34.220.73.140> : centos
  Enter SSH password for Cassandra node <34.220.73.140> (optional):
  Is the connection to the node using a SSH private key? Y/N [N] : Y
  Enter the path to the private SSH key for Cassandra node <34.220.73.140>: /home/centos/my-ec2-private-key.pem
  Enter passphrase for SSH private key for Cassandra node <34.220.73.140> (optional):
  Enter the path to the cassandra.yaml file location on the node <34.220.73.140>: /etc/cassandra/conf/
  Enter the path to the Cassandra data directories on the node <34.220.73.140>: /u01/cassandra/data
  ===== Mounting process started =====
  Node [34.220.73.140] mounting started.
  Will be executed command:
  sudo sshfs ubuntu@34.220.73.140:/etc/cassandra/ /mnt/aws-cassandra-data-extractor/34.220.73.140_node/conf/ -p 22 -o allow_other -o StrictHostKeyChecking=no -o IdentityFile=/home/ubuntu/dbbest-ec2-oregon_s.pem > /var/log/aws-cassandra-data-extractor/dmt-cassandra-v3/conf_34.220.73.140.log 2>&1
  Will be executed command:
  sudo sshfs ubuntu@34.220.73.140:/u01/cassandra/data/ /mnt/aws-cassandra-data-extractor/34.220.73.140_node/data/data -p 22 -o allow_other -o StrictHostKeyChecking=no -o IdentityFile=/home/ubuntu/dbbest-ec2-oregon_s.pem > /var/log/aws-cassandra-data-extractor/dmt-cassandra-v3/data_34.220.73.140.log 2>&1
  ===== Mounting process was over =====
  Enable SSL communication Y/N [N] : Y
  Path to key store: /home/centos/Cassandra_key
  Key store password:123456
  Re-enter the key store password:123456
  Path to trust store: /home/centos/Cassandra_trust
  Trust store password:123456
  Re-enter the trust store password:123456
  Enter the path to the output local folder: /home/centos/out_data
  === Configuration aws-agent-settings.yaml successful completed ===
  If you want to add new nodes or change it parameters, you should edit the configuration file /etc/aws-cassandra-data-extractor/dmt-cassandra-v3/aws-agent-settings.yaml
Note
When the configuration utility has completed, you might see the following message:
Change the SSH private keys permission to 600 to secure them. You can also set permissions to 400.
You can use the chmod command to change the permissions, as in this example:
chmod 400 /home/centos/my-ec2-private-key.pem
  After the configuration utility completes, review the following directories and files:
  - /etc/cassandra-data-extractor/agent-settings.yaml (the settings file for the agent)
  - $HOME/out_data (a directory for extraction output files)
  - /mnt/cassandra-data-extractor/34.220.73.140_node/conf (an empty Cassandra home folder; replace 34.220.73.140 with your actual IP address)
  - /mnt/cassandra-data-extractor/34.220.73.140_node/data/data (an empty Cassandra data file; replace 34.220.73.140 with your actual IP address)
  If these directories aren't mounted, use the following command to mount them:
  sudo java -jar /usr/share/aws-cassandra-extractor/aws-cassandra-extractor.jar -mnt
- Mount the Cassandra home and data directories:
  sudo java -jar /usr/share/cassandra-extractor/rest-extraction-service.jar -mnt
  After the mounting process is complete, review the Cassandra home folder and Cassandra data file directory as shown in the following example. (Replace 34.220.73.140 with your actual IP address.)
  ls -l /mnt/cassandra-data-extractor/34.220.73.140_node/conf
  ls -l /mnt/cassandra-data-extractor/34.220.73.140_node/data/data
- Start the AWS SCT data extraction agent for Cassandra:
  sudo systemctl start aws-cassandra-extractor
  Note
  By default, the agent runs on port 8080. You can change this by editing the agent-settings.yaml file.
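After starting the service, you can confirm that the agent is up and listening. A minimal check, assuming the default port of 8080:
# Confirm that the agent service started successfully
sudo systemctl status aws-cassandra-extractor
# Confirm that the agent is listening on its port (8080 by default)
sudo ss -tlnp | grep 8080
# Review the agent logs if anything looks wrong
ls -l /var/log/cassandra-data-extractor/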
Migrate data from the clone data center to Amazon DynamoDB
You are now ready to perform the migration from the clone data center to Amazon DynamoDB, using AWS SCT. AWS SCT manages the workflows among the AWS SCT data extraction agent for Cassandra, AWS Database Migration Service (AWS DMS), and DynamoDB. You perform the migration process entirely within the AWS SCT interface, and AWS SCT manages all of the external components on your behalf.
To migrate your data, follow this procedure:
To migrate your data from the clone data center to DynamoDB
- From the View menu, choose Data migration view.
- Choose the Agents tab.
- If you haven't yet registered the AWS SCT data extraction agent, you'll see a message prompting you to register it. Choose Register.
- In the New agent registration window, add the following information:
  Description: Type a short description for this agent.
  Host name: Enter the host name of the Amazon EC2 instance you used for Install, configure, and run the data extraction agent.
  Port: Enter the port number for the agent. (The default port number is 8080.)
  Password: If you are using SSL, leave this field blank; otherwise, type the password for logging in to the host.
  Use SSL: If you are using SSL, choose this option to activate the SSL tab.
  If you are using SSL, choose the SSL tab and add the following information:
  Trust store: Choose the trust store you configured in Install, configure, and run the data extraction agent.
  Key store: Choose the key store you configured in Install, configure, and run the data extraction agent.
  When the settings are as you want them, choose Register. AWS SCT will attempt to connect with the AWS SCT data extraction agent for Cassandra.
- On the left side of the AWS SCT window, choose the Cassandra data center that you created in Create a clone data center.
- From the Actions menu, choose Create Local & DMS Task.
- In the Create Local & DMS Task window, enter the following information:
  Task name: Type a short name for the AWS DMS task to be created.
  Replication instance: Choose the AWS DMS replication instance that you want to use.
  Migration type: Choose Migrate existing data and replicate ongoing changes. This will migrate the tables in your Cassandra clone data center to DynamoDB, and then capture all ongoing changes. This process is called full load and CDC in AWS DMS.
  Target table preparation mode: If you already have corresponding tables in DynamoDB and want to delete them prior to migration, choose Drop tables on target. Otherwise, leave the setting at its default value (Do nothing).
  IAM role: Choose the predefined IAM role that has permissions to access the Amazon S3 bucket and the target database (Amazon DynamoDB). For more information about the permissions required to access an Amazon S3 bucket, see Amazon S3 settings.
  Logging level: Choose an appropriate logging level for the migration task.
  Description: Type a description for the task.
  Data encryption: Choose either Enable or Disable.
  Delete files from the local directory: Choose this option to delete data files from the agent's local directory after it loads the files to Amazon S3.
  S3 bucket: Enter the name of an Amazon S3 bucket for which you have write privileges.
  When the settings are as you want them, choose Create.
- Choose the Tasks tab, where you should see the task you created. To start the task, choose Start.
  You can monitor the task progress on the Tasks tab.
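If you want to double-check the results outside of AWS SCT, you can list the tables that the task created in DynamoDB with the AWS CLI. The Region and table name below are placeholders.
# List the DynamoDB tables in the target Region
aws dynamodb list-tables --region us-west-2
# Inspect one of the migrated tables (table name is a placeholder)
aws dynamodb describe-table --table-name my_keyspace.my_table --region us-west-2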
Post-migration activities
If you are finished with the migration and want to delete the migration task, do the following:
- Choose the Tasks tab.
- If your task is currently running, choose Stop.
- To delete the task, choose Delete.
If you no longer need to use the AWS SCT data extraction agent for Cassandra, do the following:
- Choose the Agents tab.
- Choose the agent you no longer need.
- Choose Unregister.
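If you're also finished with the extraction agent's Amazon EC2 instance, you might stop the agent service and unmount the directories that the configuration utility mounted. A sketch, assuming the default paths used earlier in this section; your mount points may differ.
# Stop the extraction agent service
sudo systemctl stop aws-cassandra-extractor
# Unmount the sshfs mount points created during configuration (paths are the defaults used earlier)
sudo umount /mnt/cassandra-data-extractor/*_node/conf
sudo umount /mnt/cassandra-data-extractor/*_node/data/data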