AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Name | Description | |
---|---|---|
AddInstanceGroup | Class representing a request to create a new instance group. | |
AddInstanceGroupsRequest | Container for the parameters to the AddInstanceGroups operation. AddInstanceGroups adds an instance group to a running cluster. | |
AddInstanceGroupsResponse | Configuration for accessing Amazon AddInstanceGroups service | |
AddInstanceGroupsResult | Output from an AddInstanceGroups call. | |
AddJobFlowStepsRequest |
Container for the parameters to the AddJobFlowSteps operation.
AddJobFlowSteps adds new steps to a running job flow. A maximum of 256 steps are
allowed in each job flow.
If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.
A step specifies the location of a JAR file stored either on the master node of the job flow or in Amazon S3. Each step is performed by the main function of the main class of the JAR file. The main class can be specified either in the manifest of the JAR or by using the MainFunction parameter of the step.
Elastic MapReduce executes each step in the order listed. For a step to be considered complete, the main function must exit with a zero exit code and all Hadoop jobs started while the step was running must have completed and run successfully.
You can only add steps to a job flow that is in one of the following states: STARTING, BOOTSTRAPPING, RUNNING, or WAITING. |
|
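As a sketch of the AddJobFlowSteps call described above, the following adds one custom JAR step to a running job flow. The job flow ID, bucket, JAR path, and arguments are placeholders, and the client setup follows the pattern used by the other examples on this page:

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// A step is a JAR plus the arguments passed to its main function.
StepConfig customJarStep = new StepConfig
{
    Name = "Custom JAR step",
    ActionOnFailure = "CONTINUE",
    HadoopJarStep = new HadoopJarStepConfig
    {
        Jar = "s3://my-bucket/my-job.jar",          // placeholder JAR location
        Args = new List<string> { "arg1", "arg2" }  // placeholder arguments
    }
};

// The target job flow must be in STARTING, BOOTSTRAPPING, RUNNING, or WAITING.
AddJobFlowStepsResponse response = emr.AddJobFlowSteps(new AddJobFlowStepsRequest
{
    JobFlowId = "j-XXXXXXXXXXXXX",                  // placeholder job flow ID
    Steps = new List<StepConfig> { customJarStep }
});
```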
AddJobFlowStepsResponse | Configuration for accessing Amazon AddJobFlowSteps service | |
AddJobFlowStepsResult | The output for the AddJobFlowSteps operation. | |
AddTagsRequest | Container for the parameters to the AddTags operation. Adds tags to an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tagging Amazon EMR Resources. | |
AddTagsResponse | Configuration for accessing Amazon AddTags service | |
AddTagsResult | This output indicates the result of adding tags to a resource. | |
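A minimal sketch of the AddTags call; the cluster ID and tag values are placeholders:

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Tag a cluster so it can be grouped for cost tracking.
emr.AddTags(new AddTagsRequest
{
    ResourceId = "j-XXXXXXXXXXXXX",  // placeholder cluster ID
    Tags = new List<Tag>
    {
        new Tag { Key = "stack", Value = "Prod" }
    }
});
```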
Application |
An application is any Amazon or third-party software that you can add to the cluster.
This structure contains a list of strings that indicates the software to use with
the cluster and accepts a user argument list. Amazon EMR accepts and forwards the
argument list to the corresponding installation script as bootstrap action arguments.
For more information, see Launch
a Job Flow on the MapR Distribution for Hadoop. Currently supported values are:
"mapr-m3" - launch the cluster using MapR M3 Edition.
"mapr-m5" - launch the cluster using MapR M5 Edition.
In Amazon EMR releases 4.0 and greater, the only accepted parameter is the application name. To pass arguments to applications, you supply a configuration for each application. |
|
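For EMR release 4.0 and greater, a sketch of supplying applications by name when launching a cluster; the release label and application names are illustrative, and this assumes RunJobFlowRequest exposes ReleaseLabel and Applications as in 4.x-era SDKs:

```csharp
// Applications are specified by name only in EMR 4.x+;
// arguments go into per-application Configuration objects instead.
RunJobFlowRequest request = new RunJobFlowRequest
{
    Name = "Cluster with applications",
    ReleaseLabel = "emr-4.2.0",  // illustrative release label
    Applications = new List<Application>
    {
        new Application { Name = "Hadoop" },
        new Application { Name = "Hive" }
    }
};
```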
BootstrapActionConfig | Configuration of a bootstrap action. | |
BootstrapActionDetail | Reports the configuration of a bootstrap action in a job flow. | |
BootstrapActions | Class that provides helper methods for constructing predefined bootstrap actions. | |
Cluster | The detailed description of the cluster. | |
ClusterStateChangeReason | The reason that the cluster changed to its current state. | |
ClusterStatus | The detailed status of the cluster. | |
ClusterSummary | The summary description of the cluster. | |
ClusterTimeline | Represents the timeline of the cluster's lifecycle. | |
Command | An entity describing an executable that runs on a cluster. | |
Configuration |
Amazon EMR releases 4.x or later. Specifies a hardware and software configuration of the EMR cluster. This includes configurations for applications and software bundled with Amazon EMR. The Configuration object is a JSON object which is defined by a classification and a set of properties. Configurations can be nested, so a configuration may have its own Configuration objects listed. |
|
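A sketch of such a Configuration object; the classification and property below are illustrative values, not recommendations:

```csharp
// A Configuration is a classification plus a set of properties, as described above.
Configuration coreSite = new Configuration
{
    Classification = "core-site",
    Properties = new Dictionary<string, string>
    {
        { "hadoop.security.groups.cache.secs", "250" }
    }
};
```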
ConfigureDaemons | ||
ConfigureHadoop | ||
DescribeClusterRequest | Container for the parameters to the DescribeCluster operation. Provides cluster-level details including status, hardware and software configuration, VPC settings, and so on. For information about the cluster steps, see ListSteps. | |
DescribeClusterResponse | Configuration for accessing Amazon DescribeCluster service | |
DescribeClusterResult | This output contains the description of the cluster. | |
DescribeJobFlowsRequest |
Container for the parameters to the DescribeJobFlows operation.
This API is deprecated and will eventually be removed. We recommend you use ListClusters,
DescribeCluster, ListSteps, ListInstanceGroups and ListBootstrapActions
instead.
DescribeJobFlows returns a list of job flows that match all of the supplied parameters. The parameters can include a list of job flow IDs, job flow states, and restrictions on job flow creation date and time. Regardless of supplied parameters, only job flows created within the last two months are returned. If no parameters are supplied, then job flows matching either of the following criteria are returned:
Job flows created and completed in the last two weeks.
Job flows created within the last two months that are in one of the following states: RUNNING, WAITING, SHUTTING_DOWN, STARTING.
Amazon Elastic MapReduce can return a maximum of 512 job flow descriptions. |
|
DescribeJobFlowsResponse | Configuration for accessing Amazon DescribeJobFlows service | |
DescribeJobFlowsResult | The output for the DescribeJobFlows operation. | |
DescribeStepRequest | Container for the parameters to the DescribeStep operation. Provides more detail about the cluster step. | |
DescribeStepResponse | Configuration for accessing Amazon DescribeStep service | |
DescribeStepResult | This output contains the description of the cluster step. | |
EbsBlockDevice | Configuration of requested EBS block device associated with the instance group. | |
EbsBlockDeviceConfig | Configuration of requested EBS block device associated with the instance group with count of volumes that will be associated to every instance. | |
EbsConfiguration | ||
EbsVolume | EBS block device that's attached to an EC2 instance. | |
Ec2InstanceAttributes | Provides information about the EC2 instances in a cluster grouped by category. For example, key name, subnet ID, IAM instance profile, and so on. | |
HadoopJarStepConfig | A job flow step consisting of a JAR file whose main function will be executed. The main function submits a job for Hadoop to execute and waits for the job to finish or fail. | |
HadoopStepConfig | A cluster step consisting of a JAR file whose main function will be executed. The main function submits a job for Hadoop to execute and waits for the job to finish or fail. | |
Instance | Represents an EC2 instance provisioned as part of cluster. | |
InstanceGroup | This entity represents an instance group, which is a group of instances that have common purpose. For example, CORE instance group is used for HDFS. | |
InstanceGroupConfig | Configuration defining a new instance group. | |
InstanceGroupDetail | Detailed information about an instance group. | |
InstanceGroupModifyConfig | Modify an instance group size. | |
InstanceGroupStateChangeReason | The status change reason details for the instance group. | |
InstanceGroupStatus | The details of the instance group status. | |
InstanceGroupTimeline | The timeline of the instance group lifecycle. | |
InstanceStateChangeReason | The details of the status change reason for the instance. | |
InstanceStatus | The instance status details. | |
InstanceTimeline | The timeline of the instance lifecycle. | |
InternalServerErrorException | ElasticMapReduce exception | |
InternalServerException | ElasticMapReduce exception | |
InvalidRequestException | ElasticMapReduce exception | |
JobFlowDetail | A description of a job flow. | |
JobFlowExecutionStatusDetail | Describes the status of the job flow. | |
JobFlowInstancesConfig | A description of the Amazon EC2 instance running the job flow. A valid JobFlowInstancesConfig must contain at least InstanceGroups, which is the recommended configuration. However, a valid alternative is to have MasterInstanceType, SlaveInstanceType, and InstanceCount (all three must be present). | |
JobFlowInstancesDetail | Specify the type of Amazon EC2 instances to run the job flow on. | |
KeyValue | A key value pair. | |
ListBootstrapActionsRequest | Container for the parameters to the ListBootstrapActions operation. Provides information about the bootstrap actions associated with a cluster. | |
ListBootstrapActionsResponse | Configuration for accessing Amazon ListBootstrapActions service | |
ListBootstrapActionsResult | This output contains the bootstrap actions detail. | |
ListClustersRequest | Container for the parameters to the ListClusters operation. Provides the status of all clusters visible to this AWS account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls. | |
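Because ListClusters returns at most 50 clusters per call, paging with the returned marker looks roughly like this (assuming the response exposes Clusters and Marker directly):

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Collect every cluster summary by following the paging marker.
var clusters = new List<ClusterSummary>();
string marker = null;
do
{
    ListClustersResponse page = emr.ListClusters(new ListClustersRequest
    {
        Marker = marker
    });
    clusters.AddRange(page.Clusters);
    marker = page.Marker;  // null/empty when there are no more pages
} while (!string.IsNullOrEmpty(marker));
```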
ListClustersResponse | Configuration for accessing Amazon ListClusters service | |
ListClustersResult | This contains a ClusterSummaryList with the cluster details; for example, the cluster IDs, names, and status. | |
ListInstanceGroupsRequest | Container for the parameters to the ListInstanceGroups operation. Provides all available details about the instance groups in a cluster. | |
ListInstanceGroupsResponse | Configuration for accessing Amazon ListInstanceGroups service | |
ListInstanceGroupsResult | This output contains details about the instance groups in a cluster. | |
ListInstancesRequest | Container for the parameters to the ListInstances operation. Provides information about the cluster instances that Amazon EMR provisions on behalf of a user when it creates the cluster. For example, this operation indicates when the EC2 instances reach the Ready state, when instances become available to Amazon EMR to use for jobs, and the IP addresses for cluster instances. | |
ListInstancesResponse | Configuration for accessing Amazon ListInstances service | |
ListInstancesResult | This output contains the list of instances. | |
ListStepsRequest | Container for the parameters to the ListSteps operation. Provides a list of steps for the cluster. | |
ListStepsResponse | Configuration for accessing Amazon ListSteps service | |
ListStepsResult | This output contains the list of steps. | |
ModifyInstanceGroup | ||
ModifyInstanceGroupsRequest | Container for the parameters to the ModifyInstanceGroups operation. ModifyInstanceGroups modifies the number of nodes and configuration settings of an instance group. The input parameters include the new target instance count for the group and the instance group ID. The call will either succeed or fail atomically. | |
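A sketch of the ModifyInstanceGroups call; the instance group ID and target count are placeholders:

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Resize one instance group; the call succeeds or fails atomically.
emr.ModifyInstanceGroups(new ModifyInstanceGroupsRequest
{
    InstanceGroups = new List<InstanceGroupModifyConfig>
    {
        new InstanceGroupModifyConfig
        {
            InstanceGroupId = "ig-XXXXXXXXXXXX",  // placeholder group ID
            InstanceCount = 8                     // new target size
        }
    }
});
```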
ModifyInstanceGroupsResponse | ||
PlacementType | The Amazon EC2 location for the job flow. | |
RemoveTagsRequest |
Container for the parameters to the RemoveTags operation.
Removes tags from an Amazon EMR resource. Tags make it easier to associate clusters
in various ways, such as grouping clusters to track your Amazon EMR resource allocation
costs. For more information, see Tagging
Amazon EMR Resources.
The following example removes the stack tag with value Prod from a cluster: |
|
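The removal described above might be sketched as follows; tags are removed by key, and the cluster ID is a placeholder:

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Remove the "stack" tag (whatever its value) from the cluster.
emr.RemoveTags(new RemoveTagsRequest
{
    ResourceId = "j-XXXXXXXXXXXXX",        // placeholder cluster ID
    TagKeys = new List<string> { "stack" }
});
```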
RemoveTagsResponse | Configuration for accessing Amazon RemoveTags service | |
RemoveTagsResult | This output indicates the result of removing tags from a resource. | |
ResizeJobFlowStep |
This class provides some helper methods for creating a Resize Job Flow step
as part of your job flow. The resize step can be used to automatically
adjust the composition of your cluster while it is running. For example, if
you have a large workflow with different compute requirements, you can use
this step to automatically add a task instance group before your most compute
intensive step.
```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

var resize = new ResizeJobFlowStep
{
    OnArrested = OnArrested.Continue,
    OnFailure = OnFailure.Continue
};
resize.AddResizeAction(new AddInstanceGroup
{
    InstanceGroup = "core",
    InstanceCount = 10
});
resize.AddResizeAction(new AddInstanceGroup
{
    InstanceGroup = "task",
    InstanceCount = 10,
    WithInstanceType = "m1.small"
});

HadoopJarStepConfig config = resize.ToHadoopJarStepConfig();

StepConfig resizeJobFlow = new StepConfig
{
    Name = "Resize job flow",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = config
};

RunJobFlowRequest request = new RunJobFlowRequest
{
    Name = "Resize job flow",
    Steps = new List<StepConfig> { resizeJobFlow },
    LogUri = "s3://log-bucket/",
    Instances = new JobFlowInstancesConfig
    {
        Ec2KeyName = "keypair",
        HadoopVersion = "0.20",
        InstanceCount = 5,
        KeepJobFlowAliveWhenNoSteps = true,
        MasterInstanceType = "m1.small",
        SlaveInstanceType = "m1.small"
    }
};

RunJobFlowResponse response = emr.RunJobFlow(request);
```
|
|
RunJobFlowRequest |
Container for the parameters to the RunJobFlow operation.
RunJobFlow creates and starts running a new job flow. The job flow will run the steps
specified. Once the job flow completes, the cluster is stopped and the HDFS partition
is lost. To prevent loss of data, configure the last step of the job flow to store
results in Amazon S3. If the JobFlowInstancesConfig KeepJobFlowAliveWhenNoSteps
parameter is set to TRUE, the job flow will transition to the WAITING
state rather than shutting down once the steps have completed.
For additional protection, you can set the JobFlowInstancesConfig TerminationProtected parameter to TRUE to lock the job flow and prevent it from being terminated by an API call, user intervention, or in the event of a job flow error.
A maximum of 256 steps are allowed in each job flow. If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.
For long-running job flows, we recommend that you periodically store your results. |
|
RunJobFlowResponse | Configuration for accessing Amazon RunJobFlow service | |
RunJobFlowResult | The result of the RunJobFlow operation. | |
ScriptBootstrapActionConfig | Configuration of the script to run during a bootstrap action. | |
SetTerminationProtectionRequest |
Container for the parameters to the SetTerminationProtection operation.
SetTerminationProtection locks a job flow so the Amazon EC2 instances in the cluster
cannot be terminated by user intervention, an API call, or in the event of a job-flow
error. The cluster still terminates upon successful completion of the job flow. Calling
SetTerminationProtection on a job flow is analogous to calling the Amazon EC2 DisableAPITermination
API on all of the EC2 instances in a cluster.
SetTerminationProtection is used to prevent accidental termination of a job flow and to ensure that in the event of an error, the instances will persist so you can recover any data stored in their ephemeral instance storage.
To terminate a job flow that has been locked by setting SetTerminationProtection
to TRUE, you must first unlock the job flow by a subsequent call to SetTerminationProtection in which you set the value to FALSE. For more information, go to Protecting a Job Flow from Termination in the Amazon Elastic MapReduce Developer's Guide. |
|
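A sketch of unlocking a protected job flow before terminating it; the job flow ID is a placeholder:

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Unlock the job flow so a subsequent TerminateJobFlows call can succeed.
emr.SetTerminationProtection(new SetTerminationProtectionRequest
{
    JobFlowIds = new List<string> { "j-XXXXXXXXXXXXX" },  // placeholder ID
    TerminationProtected = false
});
```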
SetTerminationProtectionResponse | ||
SetVisibleToAllUsersRequest |
Container for the parameters to the SetVisibleToAllUsers operation.
Sets whether all AWS Identity and Access Management (IAM) users under your account
can access the specified job flows. This action works on running job flows. You can
also set the visibility of a job flow when you launch it using the VisibleToAllUsers
parameter of RunJobFlow. The SetVisibleToAllUsers action can be called only
by an IAM user who created the job flow or the AWS account that owns the job flow.
|
|
SetVisibleToAllUsersResponse | ||
Step | This represents a step in a cluster. | |
StepConfig | Specification of a job flow step. | |
StepDetail | Combines the execution state and configuration of a step. | |
StepExecutionStatusDetail | The execution state of a step. | |
StepFactory | This class provides helper methods for creating common Elastic MapReduce step types. To use StepFactory, you should construct it with the appropriate bucket for your region. The official bucket format is "<region>.elasticmapreduce", so us-east-1 would use the bucket "us-east-1.elasticmapreduce". | |
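A sketch of StepFactory usage with the region-specific bucket described above, assuming helper method names that mirror the Java SDK's (e.g. NewEnableDebuggingStep):

```csharp
// Construct the factory with the bucket for your region.
StepFactory stepFactory = new StepFactory("us-east-1.elasticmapreduce");

StepConfig enableDebugging = new StepConfig
{
    Name = "Enable debugging",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = stepFactory.NewEnableDebuggingStep()
};
```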
StepStateChangeReason | The details of the step state change reason. | |
StepStatus | The execution status details of the cluster step. | |
StepSummary | The summary of the cluster step. | |
StepTimeline | The timeline of the cluster step lifecycle. | |
StreamingStep |
Class that makes it easy to define Hadoop Streaming steps.
See also: Hadoop Streaming

```csharp
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
IAmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

HadoopJarStepConfig config = new StreamingStep
{
    Inputs = new List<string> { "s3://elasticmapreduce/samples/wordcount/input" },
    Output = "s3://my-bucket/output/",
    Mapper = "s3://elasticmapreduce/samples/wordcount/wordSplitter.py",
    Reducer = "aggregate"
}.ToHadoopJarStepConfig();

StepConfig wordCount = new StepConfig
{
    Name = "Word Count",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = config
};

RunJobFlowRequest request = new RunJobFlowRequest
{
    Name = "Word Count",
    Steps = new List<StepConfig> { wordCount },
    LogUri = "s3://log-bucket/",
    Instances = new JobFlowInstancesConfig
    {
        Ec2KeyName = "keypair",
        HadoopVersion = "0.20",
        InstanceCount = 5,
        KeepJobFlowAliveWhenNoSteps = true,
        MasterInstanceType = "m1.small",
        SlaveInstanceType = "m1.small"
    }
};

RunJobFlowResponse response = emr.RunJobFlow(request);
```
|
|
SupportedProductConfig | The list of supported product configurations which allow user-supplied arguments. EMR accepts these arguments and forwards them to the corresponding installation script as bootstrap action arguments. | |
Tag | A key/value pair containing user-defined metadata that you can associate with an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tagging Amazon EMR Resources. | |
TerminateJobFlowsRequest |
Container for the parameters to the TerminateJobFlows operation.
TerminateJobFlows shuts down a list of job flows. When a job flow is shut down, any
step not yet completed is canceled and the EC2 instances on which the job flow is
running are stopped. Any log files not already saved are uploaded to Amazon S3 if
a LogUri was specified when the job flow was created.
The maximum number of JobFlows allowed is 10. The call to TerminateJobFlows is asynchronous. Depending on the configuration of the job flow, it may take up to 5-20 minutes for the job flow to completely terminate and release allocated resources, such as Amazon EC2 instances. |
|
TerminateJobFlowsResponse | ||
VolumeSpecification | EBS volume specifications such as volume type, IOPS, and size (GiB) that will be requested for the EBS volume attached to an EC2 instance in the cluster. |
Name | Description | |
---|---|---|
ResizeAction |
Name | Description | |
---|---|---|
ConfigFile | Valid config files. | |
Daemon | List of Hadoop daemons which can be configured. | |
OnArrested |
The action to take if your step is waiting for the instance group to start
and it enters the Arrested state.
Fail - Fail the step.
Wait - Continue waiting until the instance group is no longer arrested (requires manual intervention).
Continue - Proceed on to the next step. |
|
OnFailure | Action to take if there is a failure modifying your cluster composition. Fail - Fail the step. Continue - Proceed on to the next step. |