Predict - MachineLearning


Generates a prediction for the observation using the specified MLModel.

Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.

Request Syntax

{
    "MLModelId": "string",
    "PredictEndpoint": "string",
    "Record": {
        "string" : "string"
    }
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.


MLModelId

A unique identifier of the MLModel.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 64.

Pattern: [a-zA-Z0-9_.-]+

Required: Yes


PredictEndpoint

The endpoint to send the predict request to.

Type: String

Required: Yes


Record

A map of variable name-value pairs that represent an observation.

Type: String to string map

Required: Yes
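The constraints above can be enforced client-side before a request is sent. The following is a minimal sketch (the helper name `build_predict_request` is hypothetical, not part of the API) that assembles the request body and validates MLModelId against the documented length and pattern constraints:

```python
import json
import re

# Pattern and length constraints documented for MLModelId above.
ML_MODEL_ID_PATTERN = re.compile(r"^[a-zA-Z0-9_.-]+$")

def build_predict_request(ml_model_id, predict_endpoint, record):
    """Assemble and validate a Predict request body (hypothetical helper).

    Raises ValueError if a documented constraint is violated.
    """
    if not 1 <= len(ml_model_id) <= 64:
        raise ValueError("MLModelId must be 1-64 characters long")
    if not ML_MODEL_ID_PATTERN.match(ml_model_id):
        raise ValueError("MLModelId must match [a-zA-Z0-9_.-]+")
    if not all(isinstance(k, str) and isinstance(v, str)
               for k, v in record.items()):
        raise ValueError("Record must be a string-to-string map")
    # All three fields are required.
    return json.dumps({
        "MLModelId": ml_model_id,
        "PredictEndpoint": predict_endpoint,
        "Record": record,
    })
```

The serialized string matches the Request Syntax shown above and can be posted with the headers shown in the sample request.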

Response Syntax

{
    "Prediction": {
        "details": {
            "string" : "string"
        },
        "predictedLabel": "string",
        "predictedScores": {
            "string" : number
        },
        "predictedValue": number
    }
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.


Prediction

The output from a Predict operation:

  • Details - Contains the following attributes:
      • DetailsAttributes.PREDICTIVE_MODEL_TYPE - REGRESSION | BINARY | MULTICLASS
      • DetailsAttributes.ALGORITHM - SGD

  • PredictedLabel - Present for either a BINARY or MULTICLASS MLModel request.

  • PredictedScores - Contains the raw classification score corresponding to each label.

  • PredictedValue - Present for a REGRESSION MLModel request.

Type: Prediction object
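Because only the fields that match the model type are populated, a caller typically branches on the model type before reading the result. A minimal sketch, assuming the response has been deserialized into a dict with the lowercase keys from the Response Syntax and that the details map carries a PredictiveModelType entry:

```python
def summarize_prediction(prediction):
    """Pick the populated output field(s) based on the model type.

    `prediction` is the deserialized Prediction object; key names
    follow the Response Syntax above.
    """
    model_type = prediction.get("details", {}).get("PredictiveModelType")
    if model_type == "REGRESSION":
        # Only predictedValue is populated for regression models.
        return {"value": prediction["predictedValue"]}
    if model_type in ("BINARY", "MULTICLASS"):
        # Classification models populate predictedLabel and the raw
        # per-label scores in predictedScores.
        return {
            "label": prediction["predictedLabel"],
            "scores": prediction["predictedScores"],
        }
    raise ValueError("unexpected PredictiveModelType: %r" % model_type)
```

This mirrors the rule stated in the note at the top of the page: which response parameters are populated depends on the type of model requested.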


Errors

For information about the errors that are common to all actions, see Common Errors.


InternalServerException

An error on the server occurred when trying to process a request.

HTTP Status Code: 500


InvalidInputException

An error on the client occurred. Typically, the cause is an invalid input value.

HTTP Status Code: 400


LimitExceededException

The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.

HTTP Status Code: 400


PredictorNotMountedException

Thrown when a predict request is made to an unmounted MLModel.

HTTP Status Code: 400


ResourceNotFoundException

A specified resource cannot be located.

HTTP Status Code: 400


Examples

The following is a sample request and response of the Predict operation.

This example illustrates one usage of Predict.

Sample Request

POST / HTTP/1.1
Host: <hostname from the GetMLModel response EndpointUrl object>
x-amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=contenttype;date;host;user-agent;x-amz-date;x-amz-target;x-amzn-requestid, Signature=<Signature>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Connection: Keep-Alive
X-Amz-Target: AmazonML_20141212.Predict

{
    "MLModelId": "exampleMLModelId",
    "Record": { "ExampleData": "exampleValue" },
    "PredictEndpoint": "<realtime endpoint from Amazon Machine Learning for exampleMLModelId>"
}

Sample Response

HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Date: <Date>

{
    "PredictedLabel": "0",
    "PredictedScores": { "0": "0.446588516" },
    "Details": { "PredictiveModelType": "BINARY", "Algorithm": "SGD" }
}
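Reading the sample response body back out is straightforward. A sketch, with the field names taken from the sample above (note the sample serializes the score as a string, so it is converted to a float here):

```python
import json

# Body of the sample response above, as valid JSON.
sample_body = """
{
  "PredictedLabel": "0",
  "PredictedScores": { "0": "0.446588516" },
  "Details": { "PredictiveModelType": "BINARY", "Algorithm": "SGD" }
}
"""

response = json.loads(sample_body)
label = response["PredictedLabel"]
# The raw classification score corresponding to the predicted label.
score = float(response["PredictedScores"][label])
```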

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: