AWS SDK for .NET Documentation
Amazon.ElasticMapReduce.Model Namespace
Declaration Syntax
C#
namespace Amazon.ElasticMapReduce.Model
Types
AddInstanceGroup
Class representing the creation of a new instance group.

AddInstanceGroupsRequest
Container for the parameters to the AddInstanceGroups operation. AddInstanceGroups adds an instance group to a running cluster.

AddInstanceGroupsResponse
Returns information about the AddInstanceGroupsResult response and response metadata.

AddInstanceGroupsResult
Output from an AddInstanceGroups call.

AddJobFlowStepsRequest
Container for the parameters to the AddJobFlowSteps operation. AddJobFlowSteps adds new steps to a running job flow. A maximum of 256 steps are allowed in each job flow.

If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.

A step specifies the location of a JAR file stored either on the master node of the job flow or in Amazon S3. Each step is performed by the main function of the main class of the JAR file. The main class can be specified either in the manifest of the JAR or by using the MainFunction parameter of the step.

Elastic MapReduce executes each step in the order listed. For a step to be considered complete, the main function must exit with a zero exit code and all Hadoop jobs started while the step was running must have completed and run successfully.

You can only add steps to a job flow that is in one of the following states: STARTING, BOOTSTRAPPING, RUNNING, or WAITING.
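As a sketch of the calling pattern (the job flow ID, S3 paths, and step arguments below are placeholders, not values from this page), adding a custom JAR step to a running job flow might look like:

```csharp
// Minimal sketch: add a custom JAR step to a running job flow.
// "j-XXXXXXXXXXXXX" and the S3 paths are placeholder values.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

StepConfig customStep = new StepConfig
{
    Name = "Custom JAR step",
    ActionOnFailure = "CONTINUE",
    HadoopJarStep = new HadoopJarStepConfig
    {
        Jar = "s3://my-bucket/my-steps.jar",
        Args = new List<string> { "arg1", "arg2" }
    }
};

AddJobFlowStepsRequest request = new AddJobFlowStepsRequest
{
    JobFlowId = "j-XXXXXXXXXXXXX",
    Steps = new List<StepConfig> { customStep }
};

emr.AddJobFlowSteps(request);
```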


AddJobFlowStepsResponse
Returns information about the AddJobFlowStepsResult response and response metadata.

AddJobFlowStepsResult
The output for the AddJobFlowSteps operation.

AddTagsRequest
Container for the parameters to the AddTags operation. Adds tags to an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tagging Amazon EMR Resources.
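A minimal sketch of tagging a cluster for cost tracking (the resource ID and tag values are placeholders):

```csharp
// Minimal sketch: tag a cluster so its costs can be grouped and tracked.
// "j-XXXXXXXXXXXXX" and the tag values are placeholders.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

AddTagsRequest request = new AddTagsRequest
{
    ResourceId = "j-XXXXXXXXXXXXX",
    Tags = new List<Tag>
    {
        new Tag { Key = "stack", Value = "Prod" },
        new Tag { Key = "costCenter", Value = "marketing" }
    }
};

emr.AddTags(request);
```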

AddTagsResponse
Returns information about the AddTagsResult response and response metadata.

AddTagsResult
This output indicates the result of adding tags to a resource.

Application
An application is any Amazon or third-party software that you can add to the cluster. This structure contains a list of strings that indicates the software to use with the cluster and accepts a user argument list. Amazon EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action arguments. For more information, see Launch a Job Flow on the MapR Distribution for Hadoop. Currently supported values are:
  • "mapr-m3" - launch the job flow using MapR M3 Edition.
  • "mapr-m5" - launch the job flow using MapR M5 Edition.
  • "mapr" with the user arguments specifying "--edition,m3" or "--edition,m5" - launch the job flow using MapR M3 or M5 Edition, respectively.

BootstrapActionConfig
Configuration of a bootstrap action.

BootstrapActionDetail
Reports the configuration of a bootstrap action in a job flow.

BootstrapActions
Class that provides helper methods for constructing predefined bootstrap actions.

Cluster
The detailed description of the cluster.

ClusterStateChangeReason
The reason that the cluster changed to its current state.

ClusterStatus
The detailed status of the cluster.

ClusterSummary
The summary description of the cluster.

ClusterTimeline
Represents the timeline of the cluster's lifecycle.

Command
An entity describing an executable that runs on a cluster.

ConfigFile
Valid config files.

ConfigureDaemons
ConfigureHadoop
Daemon
List of Hadoop daemons which can be configured.

DescribeClusterRequest
Container for the parameters to the DescribeCluster operation. Provides cluster-level details including status, hardware and software configuration, VPC settings, and so on. For information about the cluster steps, see ListSteps.
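A minimal sketch of retrieving cluster-level details by cluster ID (the ID is a placeholder):

```csharp
// Minimal sketch: describe a cluster and print its name and state.
// "j-XXXXXXXXXXXXX" is a placeholder cluster ID.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

DescribeClusterRequest request = new DescribeClusterRequest
{
    ClusterId = "j-XXXXXXXXXXXXX"
};

Cluster cluster = emr.DescribeCluster(request).DescribeClusterResult.Cluster;
Console.WriteLine("{0}: {1}", cluster.Name, cluster.Status.State);
```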

DescribeClusterResponse
Returns information about the DescribeClusterResult response and response metadata.

DescribeClusterResult
This output contains the description of the cluster.

DescribeJobFlowsRequest
Container for the parameters to the DescribeJobFlows operation. This API is deprecated and will eventually be removed. We recommend you use ListClusters, DescribeCluster, ListSteps, ListInstanceGroups and ListBootstrapActions instead.

DescribeJobFlows returns a list of job flows that match all of the supplied parameters. The parameters can include a list of job flow IDs, job flow states, and restrictions on job flow creation date and time.

Regardless of supplied parameters, only job flows created within the last two months are returned.

If no parameters are supplied, then job flows matching either of the following criteria are returned:

  • Job flows created and completed in the last two weeks
  • Job flows created within the last two months that are in one of the following states: RUNNING, WAITING, SHUTTING_DOWN, or STARTING

Amazon Elastic MapReduce can return a maximum of 512 job flow descriptions.


DescribeJobFlowsResponse
Returns information about the DescribeJobFlowsResult response and response metadata.

DescribeJobFlowsResult
The output for the DescribeJobFlows operation.

DescribeStepRequest
Container for the parameters to the DescribeStep operation. Provides more detail about the cluster step.

DescribeStepResponse
Returns information about the DescribeStepResult response and response metadata.

DescribeStepResult
This output contains the description of the cluster step.

Ec2InstanceAttributes
Provides information about the EC2 instances in a cluster grouped by category. For example, key name, subnet ID, IAM instance profile, and so on.

HadoopJarStepConfig
A job flow step consisting of a JAR file whose main function will be executed. The main function submits a job for Hadoop to execute and waits for the job to finish or fail.

HadoopStepConfig
A cluster step consisting of a JAR file whose main function will be executed. The main function submits a job for Hadoop to execute and waits for the job to finish or fail.

StepFactory.HiveVersion
The available Hive versions.

Instance
Represents an EC2 instance provisioned as part of a cluster.

InstanceGroup
This entity represents an instance group, which is a group of instances that share a common purpose. For example, the CORE instance group is used for HDFS.

InstanceGroupConfig
Configuration defining a new instance group.

InstanceGroupDetail
Detailed information about an instance group.

InstanceGroupModifyConfig
Modify an instance group size.

InstanceGroupStateChangeReason
The status change reason details for the instance group.

InstanceGroupStatus
The details of the instance group status.

InstanceGroupTimeline
The timeline of the instance group lifecycle.

InstanceStateChangeReason
The details of the status change reason for the instance.

InstanceStatus
The instance status details.

InstanceTimeline
The timeline of the instance lifecycle.

InternalServerErrorException
ElasticMapReduce exception

InternalServerException
ElasticMapReduce exception

InvalidRequestException
ElasticMapReduce exception

JobFlowDetail
A description of a job flow.

JobFlowExecutionStatusDetail
Describes the status of the job flow.

JobFlowInstancesConfig
A description of the Amazon EC2 instances running the job flow. A valid JobFlowInstancesConfig must contain at least InstanceGroups, which is the recommended configuration. However, a valid alternative is to have MasterInstanceType, SlaveInstanceType, and InstanceCount (all three must be present).

JobFlowInstancesDetail
Specifies the type of Amazon EC2 instances on which the job flow runs.

KeyValue
A key value pair.

ListBootstrapActionsRequest
Container for the parameters to the ListBootstrapActions operation. Provides information about the bootstrap actions associated with a cluster.

ListBootstrapActionsResponse
Returns information about the ListBootstrapActionsResult response and response metadata.

ListBootstrapActionsResult
This output contains the bootstrap action details.

ListClustersRequest
Container for the parameters to the ListClusters operation. Provides the status of all clusters visible to this AWS account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.
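Because each call returns at most 50 clusters plus a marker, callers typically loop until the marker is empty. A minimal sketch of that paging pattern:

```csharp
// Minimal sketch: page through all visible clusters using the returned marker.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

string marker = null;
do
{
    ListClustersResult result = emr.ListClusters(new ListClustersRequest
    {
        Marker = marker
    }).ListClustersResult;

    foreach (ClusterSummary summary in result.Clusters)
    {
        Console.WriteLine("{0} ({1})", summary.Name, summary.Id);
    }

    // An empty marker means there are no further pages.
    marker = result.Marker;
} while (!string.IsNullOrEmpty(marker));
```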

ListClustersResponse
Returns information about the ListClustersResult response and response metadata.

ListClustersResult
This contains a ClusterSummaryList with the cluster details; for example, the cluster IDs, names, and status.

ListInstanceGroupsRequest
Container for the parameters to the ListInstanceGroups operation. Provides all available details about the instance groups in a cluster.

ListInstanceGroupsResponse
Returns information about the ListInstanceGroupsResult response and response metadata.

ListInstanceGroupsResult
This output contains the list of instance groups for the cluster.

ListInstancesRequest
Container for the parameters to the ListInstances operation. Provides information about the cluster instances that Amazon EMR provisions on behalf of a user when it creates the cluster. For example, this operation indicates when the EC2 instances reach the Ready state, when instances become available to Amazon EMR to use for jobs, and the IP addresses for cluster instances.

ListInstancesResponse
Returns information about the ListInstancesResult response and response metadata.

ListInstancesResult
This output contains the list of instances.

ListStepsRequest
Container for the parameters to the ListSteps operation. Provides a list of steps for the cluster.

ListStepsResponse
Returns information about the ListStepsResult response and response metadata.

ListStepsResult
This output contains the list of steps.

ModifyInstanceGroup
ModifyInstanceGroupsRequest
Container for the parameters to the ModifyInstanceGroups operation. ModifyInstanceGroups modifies the number of nodes and configuration settings of an instance group. The input parameters include the new target instance count for the group and the instance group ID. The call will either succeed or fail atomically.
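A minimal sketch of resizing an instance group to a new target count (the instance group ID is a placeholder):

```csharp
// Minimal sketch: resize an instance group to 10 nodes.
// "ig-XXXXXXXXXXXXX" is a placeholder instance group ID.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

ModifyInstanceGroupsRequest request = new ModifyInstanceGroupsRequest
{
    InstanceGroups = new List<InstanceGroupModifyConfig>
    {
        new InstanceGroupModifyConfig
        {
            InstanceGroupId = "ig-XXXXXXXXXXXXX",
            InstanceCount = 10
        }
    }
};

emr.ModifyInstanceGroups(request);
```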

ModifyInstanceGroupsResponse
Returns information about the ModifyInstanceGroupsResult response and response metadata.

OnArrested
The action to take if your step is waiting for the instance group to start and it enters the Arrested state.

  • Fail - Fail the step.
  • Wait - Continue waiting until the instance group is no longer arrested (requires manual intervention).
  • Continue - Proceed on to the next step.


OnFailure
Action to take if there is a failure modifying your cluster composition.

  • Fail - Fail the step.
  • Continue - Proceed on to the next step.

PlacementType
The Amazon EC2 location for the job flow.

RemoveTagsRequest
Container for the parameters to the RemoveTags operation. Removes tags from an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tagging Amazon EMR Resources.

The following example removes the stack tag with value Prod from a cluster:
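(The original example did not survive extraction; the sketch below reconstructs the same operation under the assumption that RemoveTags identifies tags by key. The resource ID is a placeholder.)

```csharp
// Minimal sketch: remove the "stack" tag from a cluster.
// "j-XXXXXXXXXXXXX" is a placeholder resource ID; tags are removed by key.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

RemoveTagsRequest request = new RemoveTagsRequest
{
    ResourceId = "j-XXXXXXXXXXXXX",
    TagKeys = new List<string> { "stack" }
};

emr.RemoveTags(request);
```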


RemoveTagsResponse
Returns information about the RemoveTagsResult response and response metadata.

RemoveTagsResult
This output indicates the result of removing tags from a resource.

ResizeAction
ResizeJobFlowStep
This class provides some helper methods for creating a Resize Job Flow step as part of your job flow. The resize step can be used to automatically adjust the composition of your cluster while it is running. For example, if you have a large workflow with different compute requirements, you can use this step to automatically add a task instance group before your most compute intensive step.
C#
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

HadoopJarStepConfig config = new ResizeJobFlowStep()
    .WithResizeAction(new ModifyInstanceGroup()
        .WithInstanceGroup("core")
        .WithInstanceCount(10))
    .WithResizeAction(new AddInstanceGroup()
        .WithInstanceGroup("task")
        .WithInstanceCount(10)
        .WithInstanceType("m1.small"))
    .WithOnArrested(OnArrested.Continue)
    .WithOnFailure(OnFailure.Continue)
    .ToHadoopJarStepConfig();

StepConfig resizeJobFlow = new StepConfig
{
    Name = "Resize job flow",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = config
};

RunJobFlowRequest request = new RunJobFlowRequest
{
    Name = "Resize job flow",
    Steps = new List<StepConfig> { resizeJobFlow },
    LogUri = "s3://log-bucket/",
    Instances = new JobFlowInstancesConfig
    {
        Ec2KeyName = "keypair",
        HadoopVersion = "0.20",
        InstanceCount = 5,
        KeepJobFlowAliveWhenNoSteps = true,
        MasterInstanceType = "m1.small",
        SlaveInstanceType = "m1.small"
    }
};

RunJobFlowResult result = emr.RunJobFlow(request).RunJobFlowResult;

RunJobFlowRequest
Container for the parameters to the RunJobFlow operation. RunJobFlow creates and starts running a new job flow. The job flow will run the steps specified. Once the job flow completes, the cluster is stopped and the HDFS partition is lost. To prevent loss of data, configure the last step of the job flow to store results in Amazon S3. If the JobFlowInstancesConfig KeepJobFlowAliveWhenNoSteps parameter is set to TRUE, the job flow will transition to the WAITING state rather than shutting down once the steps have completed.

For additional protection, you can set the JobFlowInstancesConfig TerminationProtected parameter to TRUE to lock the job flow and prevent it from being terminated by API call, user intervention, or in the event of a job flow error.

A maximum of 256 steps are allowed in each job flow.

If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.

For long running job flows, we recommend that you periodically store your results.


RunJobFlowResponse
Returns information about the RunJobFlowResult response and response metadata.

RunJobFlowResult
The result of the RunJobFlow operation.

ScriptBootstrapActionConfig
Configuration of the script to run during a bootstrap action.

SetTerminationProtectionRequest
Container for the parameters to the SetTerminationProtection operation. SetTerminationProtection locks a job flow so the Amazon EC2 instances in the cluster cannot be terminated by user intervention, an API call, or in the event of a job-flow error. The cluster still terminates upon successful completion of the job flow. Calling SetTerminationProtection on a job flow is analogous to calling the Amazon EC2 DisableAPITermination API on all of the EC2 instances in a cluster.

SetTerminationProtection is used to prevent accidental termination of a job flow and to ensure that in the event of an error, the instances will persist so you can recover any data stored in their ephemeral instance storage.

To terminate a job flow that has been locked by setting SetTerminationProtection to true, you must first unlock the job flow by a subsequent call to SetTerminationProtection in which you set the value to false.

For more information, go to Protecting a Job Flow from Termination in the Amazon Elastic MapReduce Developer's Guide.
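The lock/unlock sequence described above can be sketched as follows (the job flow ID is a placeholder):

```csharp
// Minimal sketch: lock a job flow, then later unlock it so it can be terminated.
// "j-XXXXXXXXXXXXX" is a placeholder job flow ID.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

// Enable termination protection on the job flow.
emr.SetTerminationProtection(new SetTerminationProtectionRequest
{
    JobFlowIds = new List<string> { "j-XXXXXXXXXXXXX" },
    TerminationProtected = true
});

// ...later, unlock the job flow before calling TerminateJobFlows.
emr.SetTerminationProtection(new SetTerminationProtectionRequest
{
    JobFlowIds = new List<string> { "j-XXXXXXXXXXXXX" },
    TerminationProtected = false
});
```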


SetTerminationProtectionResponse
Returns information about the SetTerminationProtectionResult response and response metadata.

SetVisibleToAllUsersRequest
Container for the parameters to the SetVisibleToAllUsers operation. Sets whether all AWS Identity and Access Management (IAM) users under your account can access the specified job flows. This action works on running job flows. You can also set the visibility of a job flow when you launch it using the VisibleToAllUsers parameter of RunJobFlow. The SetVisibleToAllUsers action can be called only by an IAM user who created the job flow or the AWS account that owns the job flow.

SetVisibleToAllUsersResponse
Returns information about the SetVisibleToAllUsersResult response and response metadata.

Step
This represents a step in a cluster.

StepConfig
Specification of a job flow step.

StepDetail
Combines the execution state and configuration of a step.

StepExecutionStatusDetail
The execution state of a step.

StepFactory
This class provides helper methods for creating common Elastic MapReduce step types. To use StepFactory, you should construct it with the appropriate bucket for your region. The official bucket format is "<region>.elasticmapreduce", so us-east-1 would use the bucket "us-east-1.elasticmapreduce".
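A minimal sketch of building common steps with StepFactory (the helper method names follow the factory pattern this class documents; verify them against your SDK version):

```csharp
// Minimal sketch: use StepFactory helpers to build debugging and Hive steps.
StepFactory stepFactory = new StepFactory();

StepConfig enableDebugging = new StepConfig
{
    Name = "Enable debugging",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = stepFactory.NewEnableDebuggingStep()
};

StepConfig installHive = new StepConfig
{
    Name = "Install Hive",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = stepFactory.NewInstallHiveStep()
};
```

Both StepConfig objects can then be passed in the Steps list of a RunJobFlowRequest, as in the other examples on this page.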

StepStateChangeReason
The details of the step state change reason.

StepStatus
The execution status details of the cluster step.

StepSummary
The summary of the cluster step.

StepTimeline
The timeline of the cluster step lifecycle.

StreamingStep
Class that makes it easy to define Hadoop Streaming steps.

See also: Hadoop Streaming

C#
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(credentials);

HadoopJarStepConfig config = new StreamingStep {
    Inputs = new List<string> { "s3://elasticmapreduce/samples/wordcount/input" },
    Output = "s3://my-bucket/output/",
    Mapper = "s3://elasticmapreduce/samples/wordcount/wordSplitter.py",
    Reducer = "aggregate"
}.ToHadoopJarStepConfig();

StepConfig wordCount = new StepConfig {
    Name = "Word Count",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = config
};

RunJobFlowRequest request = new RunJobFlowRequest {
    Name = "Word Count",
    Steps = new List<StepConfig> { wordCount },
    LogUri = "s3://log-bucket/",
    Instances = new JobFlowInstancesConfig {
        Ec2KeyName = "keypair",
        HadoopVersion = "0.20",
        InstanceCount = 5,
        KeepJobFlowAliveWhenNoSteps = true,
        MasterInstanceType = "m1.small",
        SlaveInstanceType = "m1.small"
    }
};

RunJobFlowResult result = emr.RunJobFlow(request).RunJobFlowResult;

SupportedProductConfig
The list of supported product configurations that allow user-supplied arguments. EMR accepts these arguments and forwards them to the corresponding installation script as bootstrap action arguments.

Tag
A key/value pair containing user-defined metadata that you can associate with an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tagging Amazon EMR Resources.

TerminateJobFlowsRequest
Container for the parameters to the TerminateJobFlows operation. TerminateJobFlows shuts down a list of job flows. When a job flow is shut down, any step not yet completed is canceled and the EC2 instances on which the job flow is running are stopped. Any log files not already saved are uploaded to Amazon S3 if a LogUri was specified when the job flow was created.

The call to TerminateJobFlows is asynchronous. Depending on the configuration of the job flow, it may take up to 5-20 minutes for the job flow to completely terminate and release allocated resources, such as Amazon EC2 instances.


TerminateJobFlowsResponse
Returns information about the TerminateJobFlowsResult response and response metadata.