Amazon EMR
Developer Guide

Task Configuration (Hadoop 0.20.205)

There are a number of configuration variables for tuning the performance of your MapReduce jobs. This section describes some of the important task-related settings.

Tasks per Machine

Two configuration options determine how many tasks are run per node, one for mappers and the other for reducers. They are:


  • mapred.tasktracker.map.tasks.maximum

  • mapred.tasktracker.reduce.tasks.maximum

Amazon EMR provides defaults for these settings that depend entirely on the EC2 instance type. The following table shows the default settings for clusters launched with AMI 2.0 or 2.1.

Amazon EC2 Instance Name    Mappers    Reducers
m1.small                    2          1
m1.medium                   2          1
m1.large                    3          1
m1.xlarge                   8          3
c1.medium                   2          1
c1.xlarge                   7          2
m2.xlarge                   3          1
m2.2xlarge                  6          2
m2.4xlarge                  14         4
cc2.8xlarge                 24         6
cg1.4xlarge                 12         3


The number of default mappers is based on the memory available on each EC2 instance type. If you increase the default number of mappers, you also need to modify the task JVM settings to decrease the amount of memory allocated to each task. Failure to modify the JVM settings appropriately could result in out of memory errors.
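As a sketch, raising the map slots per node and shrinking the task heap to compensate could be done with the configure-hadoop bootstrap action when launching the cluster. The instance type, count, and -Xmx value below are illustrative only, and the exact bootstrap-action flag syntax may vary by CLI version:

```shell
# Illustrative sketch: raise map slots per node above the default and reduce
# the per-task heap so the extra tasks still fit in memory. The values shown
# are examples, not recommendations; tune them for your own workload.
elastic-mapreduce --create --alive \
  --instance-type m1.xlarge --num-instances 5 \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.tasktracker.map.tasks.maximum=9,-m,mapred.child.java.opts=-Xmx896m"
```

Here the -m shortcut writes key-value pairs into mapred-site.xml on each node; check your CLI version's documentation for the exact flag names.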

Tasks per Job (AMI 2.0 and 2.1)

When your cluster runs, Hadoop creates a number of map and reduce tasks. These numbers determine how much work can run simultaneously across your cluster. Run too few tasks and you have nodes sitting idle; run too many and there is significant framework overhead.

Amazon EMR determines the number of map tasks from the size and number of files in your input data. You configure the number of reduce tasks. There are four settings you can modify to adjust these values, described in the following table.

Parameter                     Description
mapred.map.tasks              Target number of map tasks to run. The actual number of tasks created is sometimes different than this number.
mapred.map.tasksperslot       Target number of map tasks to run as a ratio to the number of map slots in the cluster. This is used if mapred.map.tasks is not set.
mapred.reduce.tasks           Number of reduce tasks to run.
mapred.reduce.tasksperslot    Number of reduce tasks to run as a ratio of the number of reduce slots in the cluster.

The two tasksperslot parameters are unique to Amazon EMR. They only take effect if mapred.*.tasks is not defined. The order of precedence is:

  1. mapred.*.tasks set by the Hadoop job

  2. mapred.*.tasks set in mapred-conf.xml on the master node

  3. mapred.*.tasksperslot if neither of the above are defined
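For example, the highest-precedence option is to set the value on the job itself at submission time. For a job that uses Hadoop's standard GenericOptionsParser (via ToolRunner), that looks roughly like the following; the JAR name, class name, and S3 paths are placeholders:

```shell
# Hypothetical job submission: a -D value set on the job takes precedence
# over mapred-conf.xml on the master node and over the tasksperslot setting.
# my-job.jar, MyJob, and the S3 paths are placeholders for your own job.
hadoop jar my-job.jar MyJob \
  -D mapred.reduce.tasks=10 \
  s3://mybucket/input s3://mybucket/output
```

Note that the -D generic option is only parsed if the job's main class delegates argument handling to ToolRunner.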

Task JVM Settings (AMI 2.0 and 2.1)

You can configure the amount of heap space for tasks, as well as other JVM options, with the mapred.child.java.opts setting. Amazon EMR provides a default -Xmx value in this setting, with the defaults per instance type shown in the following table.

Amazon EC2 Instance Name    Default JVM value
m1.small                    -Xmx384m
m1.medium                   -Xmx768m
m1.large                    -Xmx1152m
m1.xlarge                   -Xmx1024m
c1.medium                   -Xmx384m
c1.xlarge                   -Xmx512m
m2.xlarge                   -Xmx3072m
m2.2xlarge                  -Xmx3584m
m2.4xlarge                  -Xmx3072m
cc2.8xlarge                 -Xmx2048m
cg1.4xlarge                 -Xmx1152m

You can start a new JVM for every task, which provides better task isolation, or you can share JVMs between tasks, providing lower framework overhead. If you are processing many small files, it makes sense to reuse the JVM many times to amortize the cost of start-up. However, if each task takes a long time or processes a large amount of data, then you might choose to not reuse the JVM to ensure all memory is freed for subsequent tasks.

Use the mapred.job.reuse.jvm.num.tasks option to configure the JVM reuse settings.


Amazon EMR sets the value of mapred.job.reuse.jvm.num.tasks to 20, but you can override it with a bootstrap action. A value of -1 means infinite reuse within a single job, and a value of 1 means each JVM runs a single task (no reuse).
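As a sketch, overriding the reuse count with the configure-hadoop bootstrap action might look like the following; the exact flag syntax depends on your CLI version:

```shell
# Illustrative: reuse each task JVM indefinitely within a job (-1), which can
# help amortize JVM start-up cost when processing many small files.
# Set the value to 1 instead to disable reuse entirely.
elastic-mapreduce --create \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.job.reuse.jvm.num.tasks=-1"
```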

Avoiding Cluster Slowdowns (AMI 2.0 and 2.1)

In a distributed environment, you are going to experience random delays, slow hardware, failing hardware, and other problems that collectively slow down your cluster. This is known as the stragglers problem. Hadoop has a feature called speculative execution that can help mitigate this issue. As a job nears completion, some machines finish their tasks while others are still running. Hadoop schedules duplicate copies of the remaining tasks on nodes that are free. Whichever copy of a task finishes first is the successful one, and the other copies are killed. This feature can substantially cut down on the run time of jobs. The general design of a MapReduce algorithm is such that the processing of map tasks is meant to be idempotent. However, if you are running a job where the task execution has side effects (for example, a zero reducer job that calls an external resource), it is important to disable speculative execution.

You can enable speculative execution for mappers and reducers independently. By default, Amazon EMR enables it for mappers and reducers in AMI 2.0 or 2.1. You can override these settings with a bootstrap action. For more information about using bootstrap actions, see (Optional) Create Bootstrap Actions to Install Additional Software.
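For example, a bootstrap action that disables speculative execution for both mappers and reducers might look like the following sketch (the flag syntax may vary by CLI version):

```shell
# Illustrative: turn speculative execution off for a job whose tasks have
# side effects, such as a zero-reducer job that writes to an external resource.
elastic-mapreduce --create \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.map.tasks.speculative.execution=false,-m,mapred.reduce.tasks.speculative.execution=false"
```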

Speculative Execution Parameters

Parameter                                    Default Setting
mapred.map.tasks.speculative.execution       true
mapred.reduce.tasks.speculative.execution    true