Connectors and utilities

Amazon EMR provides several connectors and utilities to access other AWS services as data sources. Typically, you access data in these services from within a program or script. For example, you can specify an Amazon Kinesis stream in a Hive query, Pig script, or MapReduce application and then operate on that data.
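One such utility is S3DistCp, which copies large amounts of data between Amazon S3 and HDFS. The following is a minimal sketch of submitting it as a cluster step with the AWS CLI; the cluster ID, bucket name, and paths are placeholder values, not values from this page.

aws emr add-steps \
  --cluster-id j-2AXXXXXXGAPLF \
  --steps 'Type=CUSTOM_JAR,Name=S3DistCpStep,ActionOnFailure=CONTINUE,Jar=command-runner.jar,Args=[s3-dist-cp,--src=s3://amzn-s3-demo-bucket/logs,--dest=hdfs:///data/logs]'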

Cleaning up after failed S3DistCp jobs

If S3DistCp cannot copy some or all of the specified files, the command or cluster step fails and returns a non-zero error code. If this occurs, S3DistCp does not clean up partially copied files. You must delete them manually.
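For example, if you run S3DistCp directly on the master node, a minimal sketch of detecting a failed copy is to check the exit status; the source bucket and destination path below are placeholder values.

s3-dist-cp --src=s3://amzn-s3-demo-bucket/logs --dest=hdfs:///data/logs
if [ $? -ne 0 ]; then
  echo "S3DistCp failed; check HDFS /tmp for partially copied files"
fi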

Partially copied files are saved to the HDFS tmp directory, in subdirectories named with the unique identifier of the S3DistCp job. You can find this ID in the standard output of the job.

For example, for an S3DistCp job with the ID 4b1c37bb-91af-4391-aaf8-46a6067085a6, you can connect to the master node of the cluster and run the following command to view output files associated with the job.

hdfs dfs -ls /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output

The command returns a list of files similar to the following:

Found 8 items
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:03 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/_SUCCESS
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:02 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00000
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:02 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00001
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:02 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00002
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:03 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00003
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:03 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00004
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:03 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00005
-rw-r--r--   1 hadoop hadoop   0 2018-12-10 06:03 /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6/output/part-r-00006

You can then run the following command to delete the directory and all of its contents.

hdfs dfs -rm -r -f /tmp/4b1c37bb-91af-4391-aaf8-46a6067085a6
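
Putting these steps together, the following sketch reviews and then removes the leftovers of a failed job; the job ID shown is the example value from above, so substitute the ID from your own job's standard output.

#!/bin/bash
# Clean up leftovers from a failed S3DistCp job, given its job ID (example value shown).
JOB_ID="4b1c37bb-91af-4391-aaf8-46a6067085a6"

# Review the partially copied files, then delete the job directory and its contents from HDFS.
hdfs dfs -ls "/tmp/${JOB_ID}/output"
hdfs dfs -rm -r -f "/tmp/${JOB_ID}"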