Class: AWS.SageMakerRuntime
- Inherits: AWS.Service
- Inheritance hierarchy: Object → AWS.Service → AWS.SageMakerRuntime
- Identifier: sagemakerruntime
- API Version: 2017-05-13
Overview
Constructs a service interface object. Each API operation is exposed as a function on service.
Service Description
The Amazon SageMaker runtime API.
Sending a Request Using SageMakerRuntime
var sagemakerruntime = new AWS.SageMakerRuntime();
sagemakerruntime.invokeEndpoint(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Locking the API Version
To ensure that the SageMakerRuntime object uses this specific API, construct the object by passing the apiVersion option to the constructor:
var sagemakerruntime = new AWS.SageMakerRuntime({apiVersion: '2017-05-13'});
You can also set the API version globally in AWS.config.apiVersions using the sagemakerruntime service identifier:
AWS.config.apiVersions = {
  sagemakerruntime: '2017-05-13',
  // other service API versions
};
var sagemakerruntime = new AWS.SageMakerRuntime();
Constructor Summary
- new AWS.SageMakerRuntime(options = {}) ⇒ Object
  Constructs a service object.
Property Summary
- endpoint ⇒ AWS.Endpoint (readwrite)
  An Endpoint object representing the endpoint URL for service requests.
Properties inherited from AWS.Service
Method Summary
- invokeEndpoint(params = {}, callback) ⇒ AWS.Request
  After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.
- invokeEndpointAsync(params = {}, callback) ⇒ AWS.Request
  After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint in an asynchronous manner. Inference requests sent to this API are enqueued for asynchronous processing.
- invokeEndpointWithResponseStream(params = {}, callback) ⇒ AWS.Request
  Invokes a model at the specified endpoint to return the inference response as a stream.
Methods inherited from AWS.Service
makeRequest, makeUnauthenticatedRequest, waitFor, setupRequestListeners, defineService
Constructor Details
new AWS.SageMakerRuntime(options = {}) ⇒ Object
Constructs a service object. This object has one method for each API operation.
Property Details
Method Details
invokeEndpoint(params = {}, callback) ⇒ AWS.Request
After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.
For an overview of Amazon SageMaker, see How It Works.
Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.
Calls to InvokeEndpoint are authenticated by using Amazon Web Services Signature Version 4. For information, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 API Reference.
A customer's model containers must respond to requests within 60 seconds. The model itself can have a maximum processing time of 60 seconds before responding to invocations. If your model will take 50 to 60 seconds of processing time, set the SDK socket timeout to 70 seconds.
invokeEndpointAsync(params = {}, callback) ⇒ AWS.Request
After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint in an asynchronous manner.
Inference requests sent to this API are enqueued for asynchronous processing. The processing of the inference request may or may not complete before you receive a response from this API. The response from this API will not contain the result of the inference request, but it will contain information about where you can locate it.
Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.
Calls to InvokeEndpointAsync are authenticated by using Amazon Web Services Signature Version 4. For information, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 API Reference.
invokeEndpointWithResponseStream(params = {}, callback) ⇒ AWS.Request
Invokes a model at the specified endpoint to return the inference response as a stream. The inference stream provides the response payload incrementally as a series of parts. Before you can get an inference stream, you must have access to a model that's deployed using Amazon SageMaker hosting services, and the container for that model must support inference streaming.
For more information that can help you use this API, see the following sections in the Amazon SageMaker Developer Guide:
- For information about how to add streaming support to a model, see How Containers Serve Requests.
- For information about how to process the streaming response, see Invoke real-time endpoints.
Before you can use this operation, your IAM permissions must allow the sagemaker:InvokeEndpoint action. For more information about Amazon SageMaker actions for IAM policies, see Actions, resources, and condition keys for Amazon SageMaker in the IAM Service Authorization Reference.
Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.
Calls to InvokeEndpointWithResponseStream are authenticated by using Amazon Web Services Signature Version 4. For information, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 API Reference.