For additional protection, you can set the JobFlowInstancesConfig TerminationProtected parameter to TRUE to lock the job flow and prevent it from being terminated by user intervention, API call, or in the event of a job flow error.
A maximum of 256 steps is allowed in each job flow.
If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including connecting to the master node over SSH and submitting queries directly to the software running on it, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.
For long-running job flows, we recommend that you periodically store your results.
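As a rough sketch of the above, the request for this service method can be built with the SDK's JobFlowInstancesConfig and RunJobFlowRequest classes; the name, log URI, and instance types below are illustrative placeholders, not values prescribed by this documentation.

```java
// Hypothetical sketch: constructing a RunJobFlowRequest with termination
// protection enabled, using the AWS SDK for Java request/model classes.
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;

public class RunJobFlowExample {
    public static void main(String[] args) {
        // Instance configuration for the cluster; TerminationProtected
        // locks the job flow against termination by API call or user
        // intervention, or in the event of a job flow error.
        JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
                .withInstanceCount(3)                 // illustrative count
                .withMasterInstanceType("m1.small")   // illustrative type
                .withSlaveInstanceType("m1.small")    // illustrative type
                .withTerminationProtected(true);

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("My job flow")              // illustrative name
                .withLogUri("s3://mybucket/logs/")    // illustrative S3 URI
                .withInstances(instances);

        // The request would then be passed to the client's
        // runJobFlow(request) method, which returns a RunJobFlowResult
        // carrying the new job flow's ID.
        System.out.println(request.getName());
    }
}
```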
- request (RunJobFlowRequest)
- Container for the necessary parameters to execute the RunJobFlow service method.
Indicates that an error occurred while processing the request and that the request
was not completed.