CreateInferenceScheduler - Amazon Lookout for Equipment


Creates a scheduled inference. Scheduling an inference sets up a continuous, real-time inference plan for analyzing new measurement data. When setting up the schedule, you provide an S3 bucket location for the input data, specify the delimiter that separates entries in the data, set an optional offset delay, and choose the frequency of inferencing. You must also provide an S3 bucket location for the output data.

Request Syntax

{
   "ClientToken": "string",
   "DataDelayOffsetInMinutes": number,
   "DataInputConfiguration": {
      "InferenceInputNameConfiguration": {
         "ComponentTimestampDelimiter": "string",
         "TimestampFormat": "string"
      },
      "InputTimeZoneOffset": "string",
      "S3InputConfiguration": {
         "Bucket": "string",
         "Prefix": "string"
      }
   },
   "DataOutputConfiguration": {
      "KmsKeyId": "string",
      "S3OutputConfiguration": {
         "Bucket": "string",
         "Prefix": "string"
      }
   },
   "DataUploadFrequency": "string",
   "InferenceSchedulerName": "string",
   "ModelName": "string",
   "RoleArn": "string",
   "ServerSideKmsKeyId": "string",
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ]
}
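As an illustration, the request body above can be assembled and checked client-side before calling the API. This is a hedged sketch: the scheduler name, model name, bucket names, prefixes, and role ARN are placeholders, and the commented-out call at the end assumes boto3's `lookoutequipment` client is available and credentials are configured.

```python
import re

# Hypothetical values -- replace with your own resources.
request = {
    "InferenceSchedulerName": "my-scheduler",
    "ModelName": "my-trained-model",
    "DataUploadFrequency": "PT5M",
    "DataDelayOffsetInMinutes": 5,
    "DataInputConfiguration": {
        "S3InputConfiguration": {"Bucket": "my-input-bucket", "Prefix": "input/"},
        "InputTimeZoneOffset": "+00:00",
    },
    "DataOutputConfiguration": {
        "S3OutputConfiguration": {"Bucket": "my-output-bucket", "Prefix": "output/"},
    },
    "RoleArn": "arn:aws:iam::123456789012:role/LookoutEquipmentAccess",
}

# Client-side checks mirroring the documented constraints.
assert re.fullmatch(r"[0-9a-zA-Z_-]{1,200}", request["InferenceSchedulerName"])
assert re.fullmatch(r"[0-9a-zA-Z_-]{1,200}", request["ModelName"])
assert request["DataUploadFrequency"] in {"PT5M", "PT10M", "PT15M", "PT30M", "PT1H"}
assert 0 <= request["DataDelayOffsetInMinutes"] <= 60
assert re.fullmatch(r"arn:aws(-[^:]+)?:iam::[0-9]{12}:role/.+", request["RoleArn"])

# With boto3 installed and credentials configured, the call would be:
# import boto3
# client = boto3.client("lookoutequipment")
# response = client.create_inference_scheduler(**request)
```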

Request Parameters

The request accepts the following data in JSON format.

ClientToken

A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 256.

Pattern: \p{ASCII}{1,256}

Required: Yes

DataDelayOffsetInMinutes

The number of minutes by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin on the data until the first data measurement after the five-minute mark. In that case, the inference scheduler wakes up at the configured frequency plus the additional five-minute delay before checking the customer's S3 bucket. This means you can upload data at the same frequency without stopping and restarting the scheduler when uploading new data.

Type: Long

Valid Range: Minimum value of 0. Maximum value of 60.

Required: No
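To make the offset arithmetic concrete, here is a small illustrative sketch (not part of the API) of when a scheduler with a given frequency and delay would wake up, measured in minutes from its starting point:

```python
def wake_times(frequency_minutes, delay_minutes, count=3):
    """Minutes past the starting point at which the scheduler wakes up.

    With a 5-minute frequency and a 5-minute delay, the scheduler fires
    at each 5-minute boundary plus the delay: 10, 15, 20, ...
    """
    return [k * frequency_minutes + delay_minutes for k in range(1, count + 1)]

print(wake_times(5, 5))    # [10, 15, 20]
print(wake_times(15, 0, 2))  # [15, 30]
```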

DataInputConfiguration

Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

Type: InferenceInputConfiguration object

Required: Yes

DataOutputConfiguration

Specifies configuration information for the output results for the inference scheduler, including the S3 location for the output.

Type: InferenceOutputConfiguration object

Required: Yes

DataUploadFrequency

How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. This frequency also determines how often Amazon Lookout for Equipment runs a scheduled inference on your data: for instance, if you select 5 minutes, a scheduled inference starts once every 5 minutes.

Type: String

Valid Values: PT5M | PT10M | PT15M | PT30M | PT1H

Required: Yes
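The valid values are ISO 8601 duration strings. As a hedged convenience (not something the API requires), they can be converted to whole minutes like this:

```python
import re

def frequency_to_minutes(value):
    """Convert the documented PT..M / PT..H frequency values to minutes."""
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?", value)
    if not match or value == "PT":
        raise ValueError(f"unsupported frequency: {value!r}")
    hours, minutes = (int(g) if g else 0 for g in match.groups())
    return hours * 60 + minutes

print(frequency_to_minutes("PT15M"))  # 15
print(frequency_to_minutes("PT1H"))   # 60
```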

InferenceSchedulerName

The name of the inference scheduler being created.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 200.

Pattern: ^[0-9a-zA-Z_-]{1,200}$

Required: Yes

ModelName

The name of the previously trained ML model being used to create the inference scheduler.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 200.

Pattern: ^[0-9a-zA-Z_-]{1,200}$

Required: Yes

RoleArn

The Amazon Resource Name (ARN) of a role with permission to access the data source being used for the inference.

Type: String

Length Constraints: Minimum length of 20. Maximum length of 2048.

Pattern: arn:aws(-[^:]+)?:iam::[0-9]{12}:role/.+

Required: Yes

ServerSideKmsKeyId

Provides the identifier of the AWS KMS customer master key (CMK) that Amazon Lookout for Equipment uses to encrypt inference scheduler data.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 2048.

Pattern: ^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$

Required: No

Tags

Any tags associated with the inference scheduler.

Type: Array of Tag objects

Array Members: Minimum number of 0 items. Maximum number of 200 items.

Required: No

Response Syntax

{
   "InferenceSchedulerArn": "string",
   "InferenceSchedulerName": "string",
   "Status": "string"
}
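A minimal sketch of inspecting these fields; the sample response values below are illustrative, not real resources:

```python
# Valid Status values, per the Response Elements documentation.
VALID_STATUSES = {"PENDING", "RUNNING", "STOPPING", "STOPPED"}

def summarize(response):
    """Return a one-line summary of a CreateInferenceScheduler response."""
    status = response["Status"]
    if status not in VALID_STATUSES:
        raise ValueError(f"unexpected status: {status!r}")
    return (f"{response['InferenceSchedulerName']} "
            f"({response['InferenceSchedulerArn']}): {status}")

sample = {
    "InferenceSchedulerArn": ("arn:aws:lookoutequipment:us-east-1:"
                              "123456789012:inference-scheduler/my-scheduler"),
    "InferenceSchedulerName": "my-scheduler",
    "Status": "PENDING",
}
print(summarize(sample))
```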

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

InferenceSchedulerArn

The Amazon Resource Name (ARN) of the inference scheduler being created.

Type: String

Length Constraints: Minimum length of 20. Maximum length of 2048.

Pattern: arn:aws(-[^:]+)?:lookoutequipment:[a-zA-Z0-9\-]*:[0-9]{12}:inference-scheduler\/.+

InferenceSchedulerName

The name of the inference scheduler being created.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 200.

Pattern: ^[0-9a-zA-Z_-]{1,200}$

Status

Indicates the status of the CreateInferenceScheduler operation.

Type: String

Valid Values: PENDING | RUNNING | STOPPING | STOPPED

Errors

AccessDeniedException

The request could not be completed because you do not have access to the resource.

HTTP Status Code: 400

ConflictException

The request could not be completed due to a conflict with the current state of the target resource.

HTTP Status Code: 400

InternalServerException

Processing of the request has failed because of an unknown error, exception, or failure.

HTTP Status Code: 500

ResourceNotFoundException

The resource requested could not be found. Verify the resource ID and retry your request.

HTTP Status Code: 400

ServiceQuotaExceededException

Resource limitations have been exceeded.

HTTP Status Code: 400

ThrottlingException

The request was denied due to request throttling.

HTTP Status Code: 400

ValidationException

The input fails to satisfy constraints specified by Amazon Lookout for Equipment or a related AWS service that's being utilized.

HTTP Status Code: 400
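All of the errors above surface through the AWS SDKs as coded exceptions (in boto3, via `botocore.exceptions.ClientError`, where the code is `e.response["Error"]["Code"]`). The classification below is a hedged sketch: treating throttling and internal errors as retryable is an illustrative assumption, not an AWS recommendation.

```python
# Errors that are typically transient and may succeed on retry with backoff.
RETRYABLE = {"ThrottlingException", "InternalServerException"}

# Errors that indicate a problem with the request or account state
# and will not succeed on a plain retry.
TERMINAL = {
    "AccessDeniedException", "ConflictException",
    "ResourceNotFoundException", "ServiceQuotaExceededException",
    "ValidationException",
}

def should_retry(error_code):
    """Decide whether a CreateInferenceScheduler error is worth retrying."""
    if error_code in RETRYABLE:
        return True
    if error_code in TERMINAL:
        return False
    raise ValueError(f"unknown error code: {error_code!r}")

print(should_retry("ThrottlingException"))  # True
print(should_retry("ValidationException"))  # False
```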

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: