Understanding node allocation strategy and scenarios

This section gives an overview of node allocation strategy and common scaling scenarios that you can use with Amazon EMR managed scaling.

Node allocation strategy

Amazon EMR managed scaling allocates core and task nodes based on the following scale-up and scale-down strategies:

Scale-up strategy

  • Amazon EMR managed scaling first adds capacity to core nodes and then to task nodes until the maximum allowed capacity is reached or until the desired scale-up target capacity is achieved.

  • When Amazon EMR experiences a delay in scale-up with the current instance group, clusters that use managed scaling automatically switch to a different task instance group.

  • If the MaximumCoreCapacityUnits parameter is set, then Amazon EMR scales core nodes until the core units reach the maximum allowed limit. All the remaining capacity is added to task nodes.

  • If the MaximumOnDemandCapacityUnits parameter is set, then Amazon EMR scales the cluster by using the On-Demand Instances until the On-Demand units reach the maximum allowed limit. All the remaining capacity is added using Spot Instances.

  • If both the MaximumCoreCapacityUnits and MaximumOnDemandCapacityUnits parameters are set, Amazon EMR considers both limits during scaling.

    For example, if MaximumCoreCapacityUnits is less than MaximumOnDemandCapacityUnits, Amazon EMR first scales core nodes until the core capacity limit is reached. For the remaining capacity, Amazon EMR first uses On-Demand Instances to scale task nodes until the On-Demand limit is reached, and then uses Spot Instances for task nodes. The sketch after this list shows a policy configured this way.
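
The following is a minimal sketch of attaching a managed scaling policy that sets both limits, using the boto3 put_managed_scaling_policy API call. The cluster ID, AWS Region, and capacity values are placeholders, not values from this guide.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # placeholder Region

emr.put_managed_scaling_policy(
    ClusterId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",             # or "InstanceFleetUnits"
            "MinimumCapacityUnits": 1,
            "MaximumCapacityUnits": 20,          # maximum boundary
            "MaximumOnDemandCapacityUnits": 10,  # On-Demand limit
            "MaximumCoreCapacityUnits": 5,       # maximum core node limit
        }
    },
)
# With these example values, managed scaling first grows core nodes up to
# 5 units, then grows task nodes with On-Demand Instances until total
# On-Demand capacity reaches 10 units, and then adds Spot Instances on
# task nodes up to the 20-unit maximum.
```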

Scale-down strategy

  • Amazon EMR 5.x releases 5.34.0 and higher, and 6.x releases 6.4.0 and higher, support managed scaling that is aware of Spark shuffle data (data that Spark redistributes across partitions to perform specific operations). For more information on shuffle operations, see the Spark Programming Guide. Managed scaling scales down only instances that are under-utilized and that do not contain actively used shuffle data. This intelligent scaling prevents unintended shuffle data loss, avoiding the need for job re-attempts and recomputation of intermediate data.

  • Amazon EMR managed scaling first removes task nodes and then removes core nodes until the desired scale-down target capacity is achieved. The cluster never scales below the minimum constraints in the managed scaling policy.

  • Within each node type (either core nodes or task nodes), Amazon EMR managed scaling removes Spot Instances first and then removes On-Demand Instances.

  • For clusters that are launched with Amazon EMR 5.x releases 5.34.0 and higher, and 6.x releases 6.4.0 and higher, Amazon EMR managed scaling doesn't scale down nodes that have the ApplicationMaster for Apache Spark running on them. This minimizes job failures and retries, which helps to improve job performance and reduce costs. To confirm which nodes in your cluster are running the ApplicationMaster, visit the Spark History Server and filter for the driver under the Executors tab of your Spark application ID, or query the History Server REST API as shown in the sketch after this list.
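
In addition to the console view, the Spark History Server exposes a REST endpoint that lists the executors for an application, including the driver entry. The following is a rough sketch of locating the driver host with that API; the history server address and application ID are placeholders, and 18080 is the default Spark History Server port.

```python
import requests

# Placeholders: substitute your Spark History Server address and the ID of
# the Spark application you want to inspect.
HISTORY_SERVER = "http://<history-server-host>:18080"
APP_ID = "application_1234567890123_0001"

# The Spark monitoring REST API lists all executors, including the driver.
response = requests.get(
    f"{HISTORY_SERVER}/api/v1/applications/{APP_ID}/allexecutors", timeout=30
)
response.raise_for_status()

for executor in response.json():
    if executor["id"] == "driver":
        # In YARN cluster mode the driver runs inside the ApplicationMaster,
        # so this host identifies a node that managed scaling won't remove
        # while the application is active.
        print("Driver (ApplicationMaster) host:", executor["hostPort"])
```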

If the cluster does not have any load, then Amazon EMR cancels the addition of new instances from a previous evaluation and performs scale-down operations. If the cluster has a heavy load, Amazon EMR cancels the removal of instances and performs scale-up operations.

Node allocation considerations

We recommend that you use the On-Demand purchasing option for core nodes to avoid HDFS data loss in case of Spot reclamation. You can use the Spot purchasing option for task nodes to reduce costs and get faster job execution when more Spot Instances are added to task nodes.
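
As a rough illustration of this recommendation, the following boto3 sketch launches a cluster whose core instance group uses On-Demand Instances and whose task instance group uses Spot Instances. The cluster name, instance types, IAM roles, and subnet ID are placeholders, not values from this guide.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # placeholder Region

response = emr.run_job_flow(
    Name="managed-scaling-example",          # placeholder cluster name
    ReleaseLabel="emr-6.4.0",
    ServiceRole="EMR_DefaultRole",            # placeholder IAM roles
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "Ec2SubnetId": "subnet-xxxxxxxx",     # placeholder subnet
        "KeepJobFlowAliveWhenNoSteps": True,
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            # On-Demand core nodes protect HDFS data from Spot reclamation.
            {"Name": "Core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            # Spot task nodes reduce cost and hold no HDFS data.
            {"Name": "Task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
        ],
    },
)
```

You can then attach a managed scaling policy to the new cluster with the put_managed_scaling_policy call sketched earlier in this topic.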

Node allocation scenarios

You can create various scaling scenarios based on your needs by setting up the Maximum, Minimum, On-Demand limit, and Maximum core node parameters in different combinations.

Scenario 1: Scale core nodes only

To scale core nodes only, the managed scaling parameters must meet the following requirements:

  • The On-Demand limit is equal to the maximum boundary.

  • The maximum core node is equal to the maximum boundary.

When the On-Demand limit and the maximum core node parameters are not specified, both parameters default to the maximum boundary.

The following examples demonstrate the scenario of scaling core nodes only.

Instance groups

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand and 1 Spot

  Scaling parameters:
    UnitType: Instances
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 20
    MaximumCoreCapacityUnits: 20

Instance fleets

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand and 1 Spot

  Scaling parameters:
    UnitType: InstanceFleetUnits
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 20
    MaximumCoreCapacityUnits: 20

Scaling behavior (both configurations)

  Scale between 1 and 20 instances or instance fleet units on core nodes using On-Demand Instances. No scaling on task nodes.
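
Reusing the put_managed_scaling_policy sketch from earlier in this topic, the ComputeLimits for this scenario could look like the following (instance-group values shown; switch UnitType to InstanceFleetUnits for instance fleets):

```python
# Scenario 1 sketch: core-only scaling, because the On-Demand limit and the
# maximum core node limit both equal the maximum boundary.
compute_limits = {
    "UnitType": "Instances",
    "MinimumCapacityUnits": 1,
    "MaximumCapacityUnits": 20,
    "MaximumOnDemandCapacityUnits": 20,  # On-Demand limit = maximum boundary
    "MaximumCoreCapacityUnits": 20,      # maximum core node = maximum boundary
}
```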

Scenario 2: Scale task nodes only

To scale task nodes only, the managed scaling parameters must meet the following requirement:

  • The maximum core node must be equal to the minimum boundary.

The following examples demonstrate the scenario of scaling task nodes only.

Instance groups

  Cluster initial state:
    Core: 2 On-Demand
    Task: 1 Spot

  Scaling parameters:
    UnitType: Instances
    MinimumCapacityUnits: 2
    MaximumCapacityUnits: 20
    MaximumCoreCapacityUnits: 2

Instance fleets

  Cluster initial state:
    Core: 2 On-Demand
    Task: 1 Spot

  Scaling parameters:
    UnitType: InstanceFleetUnits
    MinimumCapacityUnits: 2
    MaximumCapacityUnits: 20
    MaximumCoreCapacityUnits: 2

Scaling behavior (both configurations)

  Keep core nodes steady at 2 and scale only task nodes, between 0 and 18 instances or instance fleet units. The capacity between the minimum and maximum boundaries is added to task nodes only.
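
As a sketch, the corresponding ComputeLimits (passed to the put_managed_scaling_policy call shown earlier) could be:

```python
# Scenario 2 sketch: task-only scaling, because the maximum core node limit
# equals the minimum boundary, so core capacity stays fixed at 2 units.
compute_limits = {
    "UnitType": "Instances",        # or "InstanceFleetUnits"
    "MinimumCapacityUnits": 2,
    "MaximumCapacityUnits": 20,
    "MaximumCoreCapacityUnits": 2,  # maximum core node = minimum boundary
}
```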

Scenario 3: Only On-Demand Instances in the cluster

To have On-Demand Instances only, your cluster and the managed scaling parameters must meet the following requirement:

  • The On-Demand limit is equal to the maximum boundary.

    When the On-Demand limit is not specified, the parameter value defaults to the maximum boundary. The default value indicates that Amazon EMR scales On-Demand Instances only.

If the maximum core node is less than the maximum boundary, the maximum core node parameter can be used to split capacity allocation between core and task nodes.

To enable this scenario in a cluster composed of instance groups, all node groups in the cluster must use the On-Demand market type during initial configuration.

The following examples demonstrate the scenario of having On-Demand Instances in the entire cluster.

Instance groups

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand

  Scaling parameters:
    UnitType: Instances
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 20
    MaximumCoreCapacityUnits: 12

Instance fleets

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand

  Scaling parameters:
    UnitType: InstanceFleetUnits
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 20
    MaximumCoreCapacityUnits: 12

Scaling behavior (both configurations)

  Scale between 1 and 12 instances or instance fleet units on core nodes using On-Demand Instances. Scale the remaining capacity on task nodes using On-Demand Instances. No scaling with Spot Instances.
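
A matching ComputeLimits sketch for this scenario, using the same call as before, could be:

```python
# Scenario 3 sketch: On-Demand only, because the On-Demand limit equals the
# maximum boundary; the core limit of 12 splits capacity between core and
# task nodes.
compute_limits = {
    "UnitType": "Instances",
    "MinimumCapacityUnits": 1,
    "MaximumCapacityUnits": 20,
    "MaximumOnDemandCapacityUnits": 20,  # On-Demand limit = maximum boundary
    "MaximumCoreCapacityUnits": 12,
}
```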

Scenario 4: Only Spot Instances in the cluster

To have Spot Instances only, the managed scaling parameters must meet the following requirement:

  • The On-Demand limit is set to 0.

If the maximum core node is less than the maximum boundary, the maximum core node parameter can be used to split capacity allocation between core and task nodes.

To enable this scenario in a cluster composed of instance groups, the core instance group must use the Spot purchasing option during initial configuration. If there is no Spot Instance in the task instance group, Amazon EMR managed scaling creates a task group using Spot Instances when needed.

The following examples demonstrate the scenario of having Spot Instances in the entire cluster.

Instance groups

  Cluster initial state:
    Core: 1 Spot
    Task: 1 Spot

  Scaling parameters:
    UnitType: Instances
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 0

Instance fleets

  Cluster initial state:
    Core: 1 Spot
    Task: 1 Spot

  Scaling parameters:
    UnitType: InstanceFleetUnits
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 0

Scaling behavior (both configurations)

  Scale between 1 and 20 instances or instance fleet units on core nodes using Spot Instances. No scaling with On-Demand Instances.
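
A matching ComputeLimits sketch for this scenario could be:

```python
# Scenario 4 sketch: Spot only, because the On-Demand limit is 0.
compute_limits = {
    "UnitType": "Instances",
    "MinimumCapacityUnits": 1,
    "MaximumCapacityUnits": 20,
    "MaximumOnDemandCapacityUnits": 0,  # no scaling with On-Demand Instances
}
```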

Scenario 5: Scale On-Demand Instances on core nodes and Spot Instances on task nodes

To scale On-Demand Instances on core nodes and Spot Instances on task nodes, the managed scaling parameters must meet the following requirements:

  • The On-Demand limit must be equal to the maximum core node.

  • Both the On-Demand limit and the maximum core node must be less than the maximum boundary.

To enable this scenario in a cluster composed of instance groups, the core node group must use the On-Demand purchasing option.

The following examples demonstrate the scenario of scaling On-Demand Instances on core nodes and Spot Instances on task nodes.

Instance groups

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand and 1 Spot

  Scaling parameters:
    UnitType: Instances
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 7
    MaximumCoreCapacityUnits: 7

Instance fleets

  Cluster initial state:
    Core: 1 On-Demand
    Task: 1 On-Demand and 1 Spot

  Scaling parameters:
    UnitType: InstanceFleetUnits
    MinimumCapacityUnits: 1
    MaximumCapacityUnits: 20
    MaximumOnDemandCapacityUnits: 7
    MaximumCoreCapacityUnits: 7

Scaling behavior (both configurations)

  Scale up to 6 On-Demand units on core nodes, because 1 On-Demand unit is already running on a task node and the On-Demand limit is 7. Then scale up to 13 Spot units on task nodes.
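
A matching ComputeLimits sketch for this scenario could be:

```python
# Scenario 5 sketch: On-Demand Instances on core nodes and Spot Instances on
# task nodes, because the On-Demand limit equals the maximum core node limit
# and both are below the maximum boundary.
compute_limits = {
    "UnitType": "Instances",
    "MinimumCapacityUnits": 1,
    "MaximumCapacityUnits": 20,
    "MaximumOnDemandCapacityUnits": 7,
    "MaximumCoreCapacityUnits": 7,
}
```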