Generates a prediction for the observation using the specified MLModel.
Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginPredict and EndPredict.
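For .NET 3.5, the Begin/End pair follows the classic Asynchronous Programming Model (APM). A minimal sketch of that pattern (the model ID and endpoint values are placeholders, not real resources):

```csharp
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

var client = new AmazonMachineLearningClient();
var request = new PredictRequest
{
    MLModelId = "ml-ExampleModelId",  // placeholder model ID
    PredictEndpoint = "https://realtime.machinelearning.us-east-1.amazonaws.com"  // example endpoint
};

// Begin the call, then block until the result is available.
// A completion callback could be passed instead of null.
IAsyncResult asyncResult = client.BeginPredict(request, null, null);
PredictResponse response = client.EndPredict(asyncResult);
```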
Namespace: Amazon.MachineLearning
Assembly: AWSSDK.MachineLearning.dll
Version: 3.x.y.z
public virtual Task<PredictResponse> PredictAsync(
    PredictRequest request,
    CancellationToken cancellationToken
)
request: Container for the necessary parameters to execute the Predict service method.
cancellationToken: A cancellation token that can be used by other objects or threads to receive notice of cancellation.
Exception | Condition |
---|---|
InternalServerException | An error on the server occurred when trying to process a request. |
InvalidInputException | An error on the client occurred. Typically, the cause is an invalid input value. |
LimitExceededException | The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource. |
PredictorNotMountedException | The exception is thrown when a predict request is made to an unmounted MLModel. |
ResourceNotFoundException | A specified resource cannot be located. |
.NET Core App:
Supported in: 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5
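A minimal usage sketch for the Task-based call, including handlers for the exceptions listed above. The model ID, endpoint URL, and record keys are illustrative placeholders, not real resources:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using Amazon.MachineLearning;
using Amazon.MachineLearning.Model;

var client = new AmazonMachineLearningClient();
var request = new PredictRequest
{
    MLModelId = "ml-ExampleModelId",  // placeholder model ID
    PredictEndpoint = "https://realtime.machinelearning.us-east-1.amazonaws.com",  // example endpoint
    Record = new Dictionary<string, string> { { "feature1", "value1" } }  // placeholder features
};

try
{
    PredictResponse response = await client.PredictAsync(request, CancellationToken.None);
    Console.WriteLine(response.Prediction.PredictedLabel);
}
catch (PredictorNotMountedException)
{
    // The MLModel has no mounted real-time endpoint yet; create one and retry.
}
catch (InvalidInputException e)
{
    Console.WriteLine("Client-side input error: " + e.Message);
}
```

Passing `CancellationToken.None` opts out of cancellation; pass a token from a `CancellationTokenSource` to allow the caller to abort the request.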