Amazon EMR
Amazon EMR Release Guide

Process Data with Streaming

Hadoop Streaming is a utility that comes with Hadoop that enables you to develop MapReduce executables in languages other than Java. Streaming is implemented in the form of a JAR file, so you can run it from the Amazon EMR API or command line just like a standard JAR file.

This section describes how to use Streaming with Amazon EMR.


Apache Hadoop Streaming is an independent tool, so not all of its functions and parameters are described here. For more information, see the Apache Hadoop Streaming documentation.

Using the Hadoop Streaming Utility

This section describes how to use Hadoop's Streaming utility.

Hadoop Process


Write your mapper and reducer executable in the programming language of your choice.

Follow the directions in Hadoop's documentation to write your streaming executables. The programs should read their input from standard input and output data through standard output. By default, each line of input/output represents a record and the first tab on each line is used as a separator between the key and value.


Test your executables locally and upload them to Amazon S3.


Use the Amazon EMR command line interface or Amazon EMR console to run your application.

Each mapper executable launches as a separate process in the cluster. Each reducer executable turns the output of the mapper executables into the final data output of the job flow.
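The record convention described in the steps above can be sketched in a few lines of Python. Note that `split_record` is a hypothetical helper name used here for illustration; it is not part of Hadoop Streaming:

```python
# Sketch of the streaming record convention: each line of text is one
# record, and the first tab on the line separates the key from the value.
# split_record is a hypothetical helper, not part of Hadoop Streaming.
def split_record(line):
    key, sep, value = line.rstrip("\n").partition("\t")
    return key, value
```

For example, `split_record("apple\t3")` returns `("apple", "3")`; any tabs after the first remain part of the value.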

The input, output, mapper, and reducer parameters are required by most Streaming applications. The following list describes these and other optional parameters.


-input

Location on Amazon S3 of the input data.

Type: String

Default: None

Constraint: URI. If no protocol is specified, the cluster's default file system is used.


-output

Location on Amazon S3 where Amazon EMR uploads the processed data.

Type: String

Default: None

Constraint: URI

Default: If a location is not specified, Amazon EMR uploads the data to the location specified by input.


-mapper

Name of the mapper executable.

Type: String

Default: None


-reducer

Name of the reducer executable.

Type: String

Default: None


-cacheFile

An Amazon S3 location containing files for Hadoop to copy into your local working directory (primarily to improve performance).

Type: String

Default: None

Constraints: [URI]#[symlink name to create in working directory]


-cacheArchive

JAR file to extract into the working directory.

Type: String

Default: None

Constraints: [URI]#[symlink directory name to create in working directory]


-combiner

Combines results.

Type: String

Default: None

Constraints: Java class name


The following code sample is a mapper executable written in Python. This script is part of the WordCount sample application.

#!/usr/bin/python

import sys

def main(argv):
    # Each line of standard input is one record.
    for line in sys.stdin:
        words = line.rstrip().split()
        for word in words:
            # Emit key/value pairs separated by a tab. The "LongValueSum:"
            # prefix tells Hadoop's aggregate reducer to sum the values.
            print("LongValueSum:" + word + "\t" + "1")

if __name__ == "__main__":
    main(sys.argv)
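The mapper above emits a "LongValueSum:" prefix so that Hadoop's built-in aggregate reducer can sum the counts. As an illustration only (this is not part of the WordCount sample application), a hand-written reducer for plain word/count pairs might look like the following sketch; `sum_counts` is a hypothetical name:

```python
import sys

# Illustrative reducer sketch (not part of the WordCount sample).
# Hadoop Streaming delivers mapper output to each reducer sorted by key,
# so all lines for a given key arrive on consecutive lines.
def sum_counts(lines):
    current_key = None
    total = 0
    for line in lines:
        key, sep, value = line.rstrip("\n").partition("\t")
        if key != current_key:
            if current_key is not None:
                yield current_key, total
            current_key = key
            total = 0
        total += int(value)
    if current_key is not None:
        yield current_key, total

if __name__ == "__main__":
    for key, total in sum_counts(sys.stdin):
        print(key + "\t" + str(total))
```

Like the mapper, this script reads records from standard input and writes its results to standard output, one tab-separated key/value pair per line.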