There are a number of configuration variables for tuning the performance of your MapReduce jobs. This section describes some of the important task-related settings.
Two configuration options determine how many tasks run concurrently on each node, one for mappers and the other for reducers. They are:

- mapred.tasktracker.map.tasks.maximum
- mapred.tasktracker.reduce.tasks.maximum
Amazon EMR provides defaults that depend on the EC2 instance type. The following table shows the default settings for clusters launched with AMIs after version 2.4.6.
| EC2 Instance Name | Mappers | Reducers |
| --- | --- | --- |
The default number of mappers is based on the memory available on each EC2 instance type. If you increase the default number of mappers, you also need to modify the task JVM settings to decrease the amount of memory allocated to each task. Failure to modify the JVM settings appropriately could result in out-of-memory errors.
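For example, raising the per-node map slot count and shrinking each task JVM can be done in a single bootstrap action at cluster launch. A sketch using the classic `elastic-mapreduce` CLI and the configure-hadoop bootstrap action; the instance type, slot count, and heap size shown are illustrative, not recommendations:

```shell
# Illustrative only: raise the per-node map slot maximum and shrink each
# task JVM so total memory use stays roughly constant.
elastic-mapreduce --create --alive \
  --instance-type m1.large --num-instances 5 \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.tasktracker.map.tasks.maximum=4,-m,mapred.child.java.opts=-Xmx512m"
```

The `-m` flag applies each key/value pair to the cluster's mapred configuration; verify the exact syntax against the CLI version you use.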
When your cluster runs, Hadoop creates a number of map and reduce tasks. The task settings determine how many tasks can run simultaneously across the cluster. Run too few tasks and you have nodes sitting idle; run too many and there is significant framework overhead.
Amazon EMR determines the number of map tasks from the size and number of files in your input data. You configure the number of reduce tasks yourself. There are four settings you can modify to adjust the number of tasks, described in the following table.
| Parameter | Description |
| --- | --- |
| mapred.map.tasks | Target number of map tasks to run. The actual number of tasks created is sometimes different from this number. |
| mapred.map.tasksperslot | Target number of map tasks to run as a ratio of the number of map slots in the cluster. This is used if mapred.map.tasks is not set. |
| mapred.reduce.tasks | Number of reduce tasks to run. |
| mapred.reduce.tasksperslot | Number of reduce tasks to run as a ratio of the number of reduce slots in the cluster. |
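For example, to size the reduce phase relative to cluster capacity rather than to a fixed count, you could set the ratio parameter at cluster launch. A sketch using the configure-hadoop bootstrap action; the ratio value is illustrative:

```shell
# Illustrative: target two reduce tasks per reduce slot in the cluster.
# This takes effect only because mapred.reduce.tasks is left unset.
elastic-mapreduce --create \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.reduce.tasksperslot=2"
```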
The two tasksperslot parameters are unique to Amazon EMR. They take effect only if mapred.*.tasks is not defined. The order of precedence is:

1. mapred.map.tasks set by the Hadoop job
2. mapred.map.tasks set in the Hadoop configuration file on the master node
3. mapred.map.tasksperslot, if neither of those is defined
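Because a job-level setting sits at the top of that precedence order, you can override everything else at submission time. A sketch, assuming your job's main class goes through Hadoop's ToolRunner/GenericOptionsParser so that -D properties are honored; the JAR name, class name, and paths are placeholders:

```shell
# Illustrative: force 10 reduce tasks for this job only, overriding any
# cluster-wide mapred.reduce.tasks or tasksperslot setting.
hadoop jar my-job.jar MyJobClass -D mapred.reduce.tasks=10 \
  s3://mybucket/input s3://mybucket/output
```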
You can configure the amount of heap space for tasks, as well as other JVM options, with the mapred.child.java.opts setting. Amazon EMR provides a default -Xmx value in this setting, with the defaults per instance type shown in the following table.
| Amazon EC2 Instance Name | Default JVM value |
| --- | --- |
You can start a new JVM for every task, which provides better task isolation, or you can share JVMs between tasks, providing lower framework overhead. If you are processing many small files, it makes sense to reuse the JVM many times to amortize the cost of start-up. However, if each task takes a long time or processes a large amount of data, then you might choose to not reuse the JVM to ensure all memory is freed for subsequent tasks.
Use the mapred.job.reuse.jvm.num.tasks option to configure JVM reuse. Amazon EMR sets the value of mapred.job.reuse.jvm.num.tasks to 20, but you can override it with a bootstrap action. A value of -1 means infinite reuse within a single job, and 1 means that each task runs in its own JVM.
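For example, a job made up of many short tasks over small files might enable unlimited reuse. A sketch using the configure-hadoop bootstrap action; the value shown is illustrative:

```shell
# Illustrative: reuse each task JVM indefinitely within a job,
# amortizing JVM start-up cost across many small tasks.
elastic-mapreduce --create \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.job.reuse.jvm.num.tasks=-1"
```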
In a distributed environment, you are going to experience random delays, slow hardware, failing hardware, and other problems that collectively slow down your cluster. This is known as the stragglers problem. Hadoop has a feature called speculative execution that can help mitigate this issue. As the cluster nears the end of a job, some machines still have tasks running while others sit free; Hadoop schedules duplicate copies of the remaining tasks on the free nodes. Whichever copy of a task finishes first is the successful one, and the other copies are killed. This feature can substantially cut down on the run time of jobs. The general design of a MapReduce algorithm is such that the processing of map tasks is meant to be idempotent. However, if you are running a job where the task execution has side effects (for example, a zero reducer job that calls an external resource), it is important to disable speculative execution.
You can enable speculative execution for mappers and reducers independently. By default, Amazon EMR enables it for mappers and reducers in AMI 2.3. You can override these settings with a bootstrap action. For more information about using bootstrap actions, see (Optional) Create Bootstrap Actions to Install Additional Software.
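As a sketch, disabling speculative execution for both phases with a bootstrap action might look like the following. These are the standard Hadoop 1.x property names; confirm them against your AMI version:

```shell
# Illustrative: turn off duplicate (speculative) task attempts for
# jobs whose tasks have side effects.
elastic-mapreduce --create \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapred.map.tasks.speculative.execution=false,-m,mapred.reduce.tasks.speculative.execution=false"
```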
Speculative Execution Parameters