Troubleshooting issues with AWS DataSync transfers
The following topics describe issues common to AWS DataSync locations and tasks and how you can resolve them.
How do I configure DataSync to use a specific NFS or SMB version to mount my file share?
For locations that support Network File System (NFS) or Server Message Block (SMB), DataSync by default chooses the protocol version for you. You can also specify the version yourself by using the DataSync console or API.
Action to take (DataSync console)
When creating your NFS or SMB location, configure the protocol version that you want DataSync to use. For more information, see Configuring AWS DataSync transfers with an NFS file server or Configuring AWS DataSync transfers with an SMB file server.
Action to take (DataSync API)
When creating or updating your NFS or SMB location, specify the Version parameter. For example, see CreateLocationNfs or CreateLocationSmb.
The following example AWS CLI command creates an NFS location that DataSync mounts by using NFS version 4.0.
aws datasync create-location-nfs --server-hostname your-server-address \
    --on-prem-config AgentArns=your-agent-arns \
    --subdirectory nfs-export-path \
    --mount-options Version="NFS4_0"
The following example AWS CLI command creates an SMB location that DataSync mounts by using SMB version 3.
aws datasync create-location-smb --server-hostname your-server-address \
    --on-prem-config AgentArns=your-agent-arns \
    --subdirectory smb-export-path \
    --mount-options Version="SMB3"
Error: Invalid SyncOption value. Option: TransferMode,PreserveDeletedFiles, Value: ALL,REMOVE.
This error occurs when you're creating or editing your DataSync task and you select the Transfer all data option and deselect the Keep deleted files option. When you transfer all data, DataSync doesn't scan your destination location and doesn't know what to delete.
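In the DataSync API, this combination corresponds to setting TransferMode=ALL together with PreserveDeletedFiles=REMOVE. A minimal sketch of a valid configuration that keeps deleted files while transferring all data (the task ARN is a placeholder):

aws datasync update-task \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0abcdef01234567890 \
    --options TransferMode=ALL,PreserveDeletedFiles=PRESERVE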
My task keeps failing with an EniNotFound error
This error occurs if you delete one of your task's network interfaces in your virtual private cloud (VPC). If your task is scheduled or queued, the task will fail if it's missing a network interface required to transfer your data.
Actions to take
You have the following options to work around this issue:
- Manually restart the task. When you do this, DataSync will create any missing network interfaces it needs to run the task.
- If you need to clean up resources in your VPC, make sure that you don't delete network interfaces related to a DataSync task that you're still using. To see the network interfaces allocated to your task, do one of the following:
  - Use the DescribeTask operation. You can view the network interfaces in the SourceNetworkInterfaceArns and DestinationNetworkInterfaceArns response elements (see the example command after this list).
  - In the Amazon EC2 console, search for your task ID (such as task-f012345678abcdef0) to find its network interfaces.
- Consider not running your tasks automatically. This could include disabling task queueing or scheduling (through DataSync or custom automation).
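The following example AWS CLI command is a sketch of the DescribeTask approach (the task ARN is a placeholder):

aws datasync describe-task \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-f012345678abcdef0 \
    --query '{Source: SourceNetworkInterfaceArns, Destination: DestinationNetworkInterfaceArns}'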
My task failed with an NFS permissions denied error
You can get a "permissions denied" error message if you configure your NFS file server with root_squash or all_squash and your files don't all have read access.
Action to take
To fix this issue, configure your NFS export with no_root_squash, or make sure that the permissions for all of the files that you want to transfer allow read access for all users.
For DataSync to access directories, you must also enable all-execute access. To make sure that the directory can be mounted, first connect to any computer that has the same network configuration as your agent. Then run the following CLI command:
mount -t nfs -o nfsvers=<your-nfs-server-version> <your-nfs-server-name>:<nfs-export-path-you-specified> <new-test-folder-on-your-computer>
If the issue still isn't resolved, contact AWS Support Center.
My task failed with an NFS mount error
You might see the following error when running a DataSync task that involves an NFS file server location:
Task failed to access location loc-1111222233334444a: x40016: mount.nfs: Connection timed out
Actions to take
Do the following until the error is resolved.
- Make sure that the NFS file server and export that you specify in your DataSync location are valid. If they aren't, delete your location and task, then create a new location and task that use a valid NFS file server and export. For more information, see Using the DataSync console.
- Check your firewall configuration between your agent and NFS file server. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage.
- Make sure that your agent can access the NFS file server and mount the export. For more information, see Providing DataSync access to NFS file servers.
- If you still see the error, open a support channel with AWS Support. For more information, see I don't know what's going on with my agent. Can someone help me?.
My task failed with an Amazon EFS mount error
You might see the following error when running a DataSync task that involves an Amazon EFS location:
Task failed to access location loc-1111222233334444a: x40016: Failed to connect to EFS mount target with IP: 10.10.1.0.
This can happen if the Amazon EFS file system's mount path that you configure with your location gets updated or deleted. DataSync isn't aware of these changes in the file system.
Action to take
Delete your location and task and create a new Amazon EFS location with the new mount path.
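For example, the following AWS CLI command is a sketch of creating the new location (all ARNs and the subdirectory are placeholders; adjust them for your file system):

aws datasync create-location-efs \
    --efs-filesystem-arn arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0 \
    --subdirectory /your-new-mount-path \
    --ec2-config SubnetArn=arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0,SecurityGroupArns=arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0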
File ownership isn't maintained with NFS transfer
After your transfer, you might notice that the files in your DataSync destination location have different user IDs (UIDs) or group IDs (GIDs) than the same files in your source location. For example, the files in your destination might have a UID of 65534, 99, or nobody.
This can happen if a file system involved in your transfer uses NFS version 4 ID mapping, a feature that DataSync doesn't support.
Action to take
You have a couple of options to work around this issue:
- Create a new location for the file system that uses NFS version 3 instead of version 4 (see the example command after this list).
- Disable NFS version 4 ID mapping on the file system.

Retry the transfer. Either option should resolve the issue.
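For the first option, you can specify the version when creating the new location, similar to the earlier example (the server address, agent ARNs, and export path are placeholders):

aws datasync create-location-nfs --server-hostname your-server-address \
    --on-prem-config AgentArns=your-agent-arns \
    --subdirectory nfs-export-path \
    --mount-options Version="NFS3"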
My task failed with a Cannot allocate memory error
When your DataSync task fails with a Cannot allocate memory error, it can mean a few different things.
Action to take
Try the following until you no longer see the issue:
- If your transfer involves an agent, make sure that the agent meets the virtual machine (VM) or Amazon EC2 instance requirements.
- Split your transfer into multiple tasks by using filters (see the example command after this list). It's possible that you're trying to transfer more files or objects than what one DataSync task can handle.
- If you still see the issue, contact AWS Support.
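For example, the following AWS CLI command is a sketch of narrowing one task execution to a subset of folders with an include filter (the task ARN and folder paths are placeholders):

aws datasync start-task-execution \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0abcdef01234567890 \
    --includes FilterType=SIMPLE_PATTERN,Value="/folder1*|/folder2*"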
My task failed with an input/output error
You can get an input/output error message if your storage system fails I/O requests from the DataSync agent. Common reasons for this include a server disk failure, changes to your firewall configuration, or a network router failure.
If the error involves an NFS file server or Hadoop Distributed File System (HDFS) cluster, use the following steps to resolve the error.
Actions to take (NFS)
First, check your NFS file server's logs and metrics to determine if the problem started on the NFS server. If yes, resolve that issue.
Next, check that your network configuration hasn't changed. To check if the NFS file server is configured correctly and that DataSync can access it, do the following:
- Set up another NFS client on the same network subnet as the agent.
- Mount your share on that client.
- Validate that the client can read and write to the share successfully.
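A minimal sketch of that check from the test client (the server address, export path, and mount point are placeholders):

showmount -e your-nfs-server-address
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs your-nfs-server-address:/your-export-path /mnt/nfs-test
echo "datasync test" > /mnt/nfs-test/write-test.txt && cat /mnt/nfs-test/write-test.txt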
Actions to take (HDFS)
Do the following until you resolve the error:
- Make sure that your HDFS cluster allows your DataSync agent to communicate with the cluster's NameNode and DataNode ports. In most clusters, you can find the port numbers that the cluster uses in the following configuration files:
  - To find the NameNode port, look in the core-site.xml file under the fs.default or fs.default.name property (depending on the Hadoop distribution).
  - To find the DataNode port, look in the hdfs-site.xml file under the dfs.datanode.address property.
- In your hdfs-site.xml file, verify that your dfs.data.transfer.protection property has only one value. For example:

  <property>
      <name>dfs.data.transfer.protection</name>
      <value>privacy</value>
  </property>
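To confirm the addresses a cluster node actually uses, one approach is the hdfs getconf command (this assumes the HDFS CLI is available on the node; key names can vary by distribution, and fs.defaultFS is the common modern name for the NameNode address):

hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.datanode.address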
My task execution has a launching status but nothing seems to be happening
Your DataSync task can get stuck with a Launching status, typically because the agent is powered off or has lost network connectivity.
Action to take
Make sure that your agent's status is ONLINE. If the agent is OFFLINE, make sure it's powered on.
If the agent is powered on and the task is still Launching, then there's likely a network connection problem between your agent and AWS. For information about how to test network connectivity, see Verifying your agent's connection to the DataSync service.
If you're still having this issue, see I don't know what's going on with my agent. Can someone help me?.
My task execution seems stuck in the preparing status
The time your DataSync transfer task has the Preparing status depends on the amount of data in your transfer source and destination and the performance of those storage systems.
When a task starts, DataSync performs a recursive directory listing to discover all files, objects, directories, and metadata in your source and destination. DataSync uses these listings to identify differences between storage systems and determine what to copy. This process can take a few minutes or even a few hours.
Action to take
You shouldn't have to do anything. Continue to wait for the task status to change to Transferring. If the status still doesn't change, contact AWS Support Center.
How long does it take DataSync to verify a task I've run?
By default, DataSync verifies data integrity at the end of a transfer. How long verification takes depends on a number of factors. The number of files or objects, the total amount of data in the source and destination storage systems, and the performance of these systems affect how long verification takes. Verification includes an SHA256 checksum on all file content and an exact comparison of all file metadata.
Action to take
You shouldn't have to do anything. If the task status still doesn't change to Success or Error, contact AWS Support Center.
My task stops before the transfer finishes
If your DataSync task execution stops early, your task configuration might include an AWS Region that's disabled in your AWS account.
Actions to take
Do the following to run your task again:
- Check the opt-in status of your task's Regions and make sure they're enabled (see the example command after this list).
- Start the task again.
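One way to check opt-in status from the AWS CLI is the EC2 DescribeRegions API, which reports each Region's OptInStatus (the Region name here is a placeholder):

aws ec2 describe-regions --all-regions \
    --query 'Regions[?RegionName==`af-south-1`].{Region: RegionName, Status: OptInStatus}'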
My task fails when transferring from a Google Cloud Storage bucket
Because DataSync communicates with Google Cloud Storage by using the Amazon S3 API, there's a limitation that might cause your DataSync transfer to fail if you try to copy object tags. The following message related to the issue appears in your CloudWatch logs:
[WARN] Failed to read metadata for file /your-bucket/your-object: S3 Get Object Tagging Failed: proceeding without tagging
To prevent this, deselect the Copy object tags option when configuring your transfer task settings.
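In the API, this corresponds to setting the ObjectTags option to NONE, for example (the task ARN is a placeholder):

aws datasync update-task \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0abcdef01234567890 \
    --options ObjectTags=NONE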
My task's start and end times don't match up with what's in the logs
Your task execution's start and end times that you see in the DataSync console might differ from timestamps you see elsewhere related to your transfer. This is because the console doesn't take into account the time a task execution spends in the launching or queueing states.
For example, your Amazon CloudWatch logs can indicate that your task execution ended later than what's displayed in the DataSync console. You may notice a similar discrepancy in the following areas:
- Logs for the file system or object storage system involved in your transfer
- The last modified date on an Amazon S3 object that DataSync wrote to
- Network traffic coming from the DataSync agent
- Amazon EventBridge events
Error: SyncTaskDeletedByUser
You may see this error unexpectedly when automating some DataSync workflows. For example, maybe you have a script that's deleting your task before a task execution has finished or while one is still queued.
To fix this issue, reconfigure your automation so that these types of actions don't overlap.
Error: NoMem
The set of data you're trying to transfer may be too large for DataSync. If you see this error, contact AWS Support Center.
Error: FsS3UnableToConnectToEndpoint
DataSync can't connect to your Amazon S3 location. This could mean the location's S3 bucket isn't reachable or the location isn't configured correctly.
Do the following until you resolve the issue:
- Check if DataSync can access your S3 bucket.
- Make sure your location is configured correctly by using the DataSync console or DescribeLocationS3 operation (see the example command after this list).
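The following example AWS CLI command shows the DescribeLocationS3 check (the location ARN is a placeholder):

aws datasync describe-location-s3 \
    --location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0abcdef01234567890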
Error: FsS3HeadBucketFailed
DataSync can't access the S3 bucket that you're transferring to or from. Check if DataSync has permission to access the bucket by using the Amazon S3 HeadBucket operation. If you need to adjust your permissions, see Providing DataSync access to S3 buckets.
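You can run the same check manually with the AWS CLI (the bucket name is a placeholder); a 403 response indicates a permissions problem:

aws s3api head-bucket --bucket amzn-s3-demo-bucket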
My task fails with an Unable to list Azure Blobs on the volume root error
If your DataSync transfer task fails with an Unable to list Azure Blobs on the volume root error, there might be an issue with your shared access signature (SAS) token or your Azure storage account's network.
Actions to take
Try the following and run your task again until you fix the issue:
- Make sure that your SAS token has the right permissions to access your Microsoft Azure Blob Storage.
- If you're running your DataSync agent in Azure, configure your storage account to allow access from the virtual network where your agent resides.
- If you're running your agent on Amazon EC2, configure your Azure storage firewall to allow access from the agent's public IP address.

For information on how to configure your Azure storage account's network, see the Azure Blob Storage documentation.
Object fails to transfer to Azure Blob Storage with user metadata key error
When transferring from an S3 bucket to Azure Blob Storage, you might see the following error:
[ERROR] Failed to transfer file /user-metadata/file1: Azure Blob user metadata key must be a CSharp identifier

This means that /user-metadata/file1 includes user metadata that doesn't use a valid C# identifier. For more information, see the Microsoft documentation.
Error: FsAzureBlobVolRootListBlobsFailed
The shared access signature (SAS) token that DataSync uses to access your Microsoft Azure Blob Storage doesn't have the List permission.
To resolve the issue, update your location with a token that has the List permission and try running your task again.
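A sketch of updating the location with the AWS CLI (the location ARN and token are placeholders):

aws datasync update-location-azure-blob \
    --location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0abcdef01234567890 \
    --sas-configuration Token='your-sas-token'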
Error: SrcLocHitAccess
DataSync can't access your source location. Check that DataSync has permission to access the location and try running your task again.
Error: SyncTaskErrorLocationNotAdded
DataSync can't access your location. Check that DataSync has permission to access the location and try running your task again.
Task report errors
You might run into one of the following errors while trying to monitor your DataSync transfer with a task report.
| Error message | Workaround |
| --- | --- |
| | N/A (DataSync can't transfer a file with a path that exceeds 4,096 bytes). For more information, see Storage system, file, and object limits. |
| | Check that the DataSync IAM role has the right permissions to upload a task report to your S3 bucket. |
| | Check your CloudWatch logs to identify why your task execution failed. |
Task with Amazon S3 fails with HeadObject or GetObjectTagging error
If you're transferring objects with specific version IDs from an S3 bucket, you might see an error related to HeadObject or GetObjectTagging. For example, here's an error related to GetObjectTagging:
[WARN] Failed to read metadata for file /picture1.png (versionId: 111111): S3 Get Object Tagging Failed
[ERROR] S3 Exception: op=GetObjectTagging photos/picture1.png, code=403, type=15, exception=AccessDenied, msg=Access Denied
req-hdrs: content-type=application/xml, x-amz-api-version=2006-03-01
rsp-hdrs: content-type=application/xml, date=Wed, 07 Feb 2024 20:16:14 GMT, server=AmazonS3, transfer-encoding=chunked, x-amz-id-2=IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km, x-amz-request-id=79104EXAMPLEB723
If you see either of these errors, validate that the IAM role that DataSync uses to access your S3 source location has the following permissions:
- s3:GetObjectVersion
- s3:GetObjectVersionTagging
If you need to update your role with these permissions, see Creating an IAM role for DataSync to access your Amazon S3 location.
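One way to confirm that the role has these permissions is the IAM policy simulator from the AWS CLI (the role ARN and bucket name are placeholders):

aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::111122223333:role/your-datasync-role \
    --action-names s3:GetObjectVersion s3:GetObjectVersionTagging \
    --resource-arns 'arn:aws:s3:::amzn-s3-demo-bucket/*'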
Why is there an /.aws-datasync folder in my destination location?
DataSync creates a folder called /.aws-datasync in your destination location to help facilitate your data transfer.
While DataSync typically deletes this folder following your transfer, there might be situations where this doesn't happen.
Action to take
You can delete this folder at any time as long as you don't have a running task execution copying to that location.