Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.
CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.
You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
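The create-then-poll pattern described above can be sketched as follows. This is a minimal illustration, not part of the service documentation: the IDs, the S3 output URI, and the 30-second polling interval are placeholder values you would replace with your own.

```csharp
using System;
using System.Threading;
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

var client = new AmazonMachineLearningClient();

// All IDs and the OutputUri below are hypothetical example values.
var request = new CreateBatchPredictionRequest
{
    BatchPredictionId = "bp-example-001",             // caller-supplied, must be unique
    BatchPredictionName = "Example batch prediction",
    MLModelId = "ml-example-model",                   // an existing MLModel
    BatchPredictionDataSourceId = "ds-example-input", // DataSource referencing the observations
    OutputUri = "s3://example-bucket/batch-output/"   // where Amazon ML writes the results
};

// Returns immediately; the BatchPrediction status starts as PENDING.
client.CreateBatchPrediction(request);

// Poll GetBatchPrediction until the job reaches a terminal status.
EntityStatus status;
do
{
    Thread.Sleep(TimeSpan.FromSeconds(30)); // arbitrary polling interval
    status = client.GetBatchPrediction(new GetBatchPredictionRequest
    {
        BatchPredictionId = request.BatchPredictionId
    }).Status;
} while (status == EntityStatus.PENDING || status == EntityStatus.INPROGRESS);

// On COMPLETED, the prediction results are available at request.OutputUri.
```

A fixed sleep is the simplest approach; production code would typically add a timeout and exponential backoff.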
For .NET Core, this operation is only available in asynchronous form. See CreateBatchPredictionAsync.
Namespace: Amazon.MachineLearning
Assembly: AWSSDK.MachineLearning.dll
Version: 3.x.y.z
public abstract CreateBatchPredictionResponse CreateBatchPrediction(CreateBatchPredictionRequest request)
Container for the necessary parameters to execute the CreateBatchPrediction service method.
| Exception | Condition |
|---|---|
| IdempotentParameterMismatchException | A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request. |
| InternalServerException | An error on the server occurred when trying to process a request. |
| InvalidInputException | An error on the client occurred. Typically, the cause is an invalid input value. |
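The exceptions in the table above can be caught individually. A minimal sketch, assuming a client and a request built elsewhere (the IDs here are placeholders):

```csharp
using System;
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

var client = new AmazonMachineLearningClient();
var request = new CreateBatchPredictionRequest
{
    BatchPredictionId = "bp-example-001",             // hypothetical example values
    MLModelId = "ml-example-model",
    BatchPredictionDataSourceId = "ds-example-input",
    OutputUri = "s3://example-bucket/batch-output/"
};

try
{
    client.CreateBatchPrediction(request);
}
catch (IdempotentParameterMismatchException)
{
    // The same BatchPredictionId was reused with different parameters;
    // choose a new ID or resend the exact original request.
}
catch (InvalidInputException e)
{
    // Client-side problem, typically an invalid input value; fix before retrying.
    Console.WriteLine(e.Message);
}
catch (InternalServerException)
{
    // Server-side error; usually safe to retry, ideally with backoff.
}
```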
.NET Framework:
Supported in: 4.5 and newer, 3.5