
Monitoring your equipment in real time

After you create a model, you can use it to monitor your asset in real time. To use your model to monitor your asset, do the following.

  1. Create an Amazon S3 bucket to store the data that your asset outputs.

  2. In that bucket, create a folder path to store the output from the asset.

  3. Create an Amazon S3 bucket for the model to output its analysis of the data from your asset.

  4. In that bucket, create a folder path for the model to store the output of its analysis.

  5. Create a data pipeline from your asset to the S3 location that stores the output data.

  6. In Amazon Lookout for Equipment, specify the frequency at which the data is uploaded.

  7. Schedule the time period that your model performs inference on the data coming from your pipeline.
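The S3 locations and upload frequency from the steps above end up as plain string values in the scheduler configuration. The following sketch assembles and sanity-checks those values ahead of time; the bucket names, folder paths, and helper functions here are hypothetical, and the list of valid frequencies is the one given in the code example later in this topic.

```python
# Hypothetical bucket names and folder paths; replace these with your own.
INPUT_BUCKET = 'asset-output-data'          # step 1: bucket for data from the asset
INPUT_PREFIX = 'pump-station-1/'            # step 2: folder path for the asset output
OUTPUT_BUCKET = 'inference-results'         # step 3: bucket for the model's analysis
OUTPUT_PREFIX = 'pump-station-1/results/'   # step 4: folder path for the analysis output

# Upload frequencies accepted by the scheduler, expressed as ISO 8601 durations.
VALID_UPLOAD_FREQUENCIES = {'PT5M', 'PT10M', 'PT30M', 'PT1H'}

def build_s3_uri(bucket, prefix):
    """Join a bucket name and folder path into an s3:// URI."""
    return f"s3://{bucket}/{prefix}"

def check_upload_frequency(frequency):
    """Raise if the frequency is not one the scheduler accepts."""
    if frequency not in VALID_UPLOAD_FREQUENCIES:
        raise ValueError(f"Unsupported upload frequency: {frequency}")
    return frequency

input_uri = build_s3_uri(INPUT_BUCKET, INPUT_PREFIX)
output_uri = build_s3_uri(OUTPUT_BUCKET, OUTPUT_PREFIX)
frequency = check_upload_frequency('PT5M')
```

Validating the frequency up front surfaces a typo such as 'PT5' before the scheduler request is ever sent.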

To create an inference schedule for the data that your asset outputs to Amazon S3, use one of the following procedures.

To schedule an inference (console)

To schedule inference, you specify the model, the schedule, the S3 location from which the model reads the data, and the S3 location where it outputs the results of the inference.

  1. Sign in to the AWS Management Console and open the Amazon Lookout for Equipment console.

  2. Choose Models. Then choose the model that monitors your asset.

  3. Choose Schedule inference.

  4. For Inference schedule name, specify a name for the inference schedule.

  5. For Model, choose the model that is monitoring the data coming from your asset.

  6. For S3 location under Input data, specify the Amazon S3 location of the input data coming from the asset.

  7. For Data upload frequency, specify how often your asset sends the data to the S3 bucket.

  8. For S3 location under Output data, specify the S3 location to store the output of the inference results.

  9. For IAM role under Access Permissions, specify the IAM role that provides Amazon Lookout for Equipment with access to your data in Amazon S3.

  10. Choose Schedule inference.
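The IAM role in step 9 must let Amazon Lookout for Equipment read from the input bucket and write to the output bucket. The following is a minimal sketch of what such a permissions policy might contain, built as a Python dict; the bucket names are placeholders, and the exact set of actions your setup requires may differ.

```python
import json

# Placeholder bucket names; replace with the buckets from your inference schedule.
input_bucket = 'data-source-bucket'
output_bucket = 'data-output-bucket'

# A minimal permissions policy: list both buckets, read from the
# input bucket, and write to the output bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{input_bucket}",
                f"arn:aws:s3:::{output_bucket}",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{input_bucket}/*"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{output_bucket}/*"],
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
```

You can attach a policy document like this to the role that you pass to the scheduler, alongside a trust policy that allows the Lookout for Equipment service to assume the role.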

The following example code uses the AWS SDK for Python (Boto3) to schedule an inference for your asset.

import boto3
import pprint
import time
from botocore.config import Config

config = Config(region_name='Region')

lookoutequipment = boto3.client(service_name="lookoutequipment", config=config)

# Specify a name for the inference scheduler.
INFERENCE_SCHEDULER_NAME = 'inference-scheduler-name'
MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER = 'model-name'

# You must specify values for the following variables to successfully schedule an inference.
# DATA_UPLOAD_FREQUENCY – The frequency at which the asset uploads data to the Amazon S3
#   bucket containing the inference data. The valid values are PT5M, PT10M, PT30M, and PT1H.
# INFERENCE_DATA_SOURCE_BUCKET – The S3 bucket that stores the inference data coming from your asset.
# INFERENCE_DATA_SOURCE_PREFIX – The S3 prefix that helps you access the inference data coming from your asset.
# INFERENCE_DATA_OUTPUT_BUCKET – The S3 bucket that stores the results of the inference.
# INFERENCE_DATA_OUTPUT_PREFIX – The S3 prefix that helps you access the results of the inference.
# ROLE_ARN_FOR_INFERENCE – The IAM role that gives Amazon Lookout for Equipment read permissions for Amazon S3.

# You can specify values for the following optional variables.
# DATA_DELAY_OFFSET_IN_MINUTES – The number of minutes to account for a delay in uploading
#   the data to Amazon S3 from your data pipeline.
# INPUT_TIMEZONE_OFFSET – The default timezone for running inference is UTC. You can offset
#   the default timezone in increments of 30 minutes. This offset applies only to the file name.
#   If you use the offset, the timestamps for the sensor must also be in UTC. The valid values
#   are +00:00, +00:30, +01:00, ... +11:30, +12:00 and -00:30, -01:00, ... -11:30, -12:00.
# TIMESTAMP_FORMAT – Specifies how the model outputs the timestamp in the results of the
#   inference. The valid values are EPOCH, yyyy-MM-dd-HH-mm-ss, and yyyyMMddHHmmss.
# COMPONENT_TIMESTAMP_DELIMITER – Specifies the character that separates the component name
#   from the timestamp in the input file name. The default delimiter is - (hyphen). The valid
#   values are -, _, and ' ' (space).
DATA_DELAY_OFFSET_IN_MINUTES = None
INPUT_TIMEZONE_OFFSET = None
COMPONENT_TIMESTAMP_DELIMITER = None
TIMESTAMP_FORMAT = None

# Create an inference scheduler.
scheduler_name = INFERENCE_SCHEDULER_NAME
model_name = MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER
INFERENCE_DATA_SOURCE_BUCKET = 'data-source-bucket'
INFERENCE_DATA_SOURCE_PREFIX = 'data-source-prefix'
INFERENCE_DATA_OUTPUT_BUCKET = 'data-output-bucket'
INFERENCE_DATA_OUTPUT_PREFIX = 'data-output-prefix'
ROLE_ARN_FOR_INFERENCE = 'arn:aws:iam::account-id:role/role-name'
DATA_UPLOAD_FREQUENCY = 'data-upload-frequency'

create_inference_scheduler_request = {
    'ModelName': model_name,
    'InferenceSchedulerName': scheduler_name,
    'DataUploadFrequency': DATA_UPLOAD_FREQUENCY,
    'RoleArn': ROLE_ARN_FOR_INFERENCE,
}

if DATA_DELAY_OFFSET_IN_MINUTES is not None:
    create_inference_scheduler_request['DataDelayOffsetInMinutes'] = DATA_DELAY_OFFSET_IN_MINUTES

# Set up the data input configuration.
inference_input_config = {
    'S3InputConfiguration': {
        'Bucket': INFERENCE_DATA_SOURCE_BUCKET,
        'Prefix': INFERENCE_DATA_SOURCE_PREFIX,
    }
}
if INPUT_TIMEZONE_OFFSET is not None:
    inference_input_config['InputTimeZoneOffset'] = INPUT_TIMEZONE_OFFSET
if COMPONENT_TIMESTAMP_DELIMITER is not None or TIMESTAMP_FORMAT is not None:
    inference_input_name_configuration = {}
    if COMPONENT_TIMESTAMP_DELIMITER is not None:
        inference_input_name_configuration['ComponentTimestampDelimiter'] = COMPONENT_TIMESTAMP_DELIMITER
    if TIMESTAMP_FORMAT is not None:
        inference_input_name_configuration['TimestampFormat'] = TIMESTAMP_FORMAT
    inference_input_config['InferenceInputNameConfiguration'] = inference_input_name_configuration
create_inference_scheduler_request['DataInputConfiguration'] = inference_input_config

# Set up the output configuration.
inference_output_configuration = {
    'S3OutputConfiguration': {
        'Bucket': INFERENCE_DATA_OUTPUT_BUCKET,
        'Prefix': INFERENCE_DATA_OUTPUT_PREFIX,
    }
}
create_inference_scheduler_request['DataOutputConfiguration'] = inference_output_configuration

########################################################
# Invoke create_inference_scheduler
########################################################

create_scheduler_response = lookoutequipment.create_inference_scheduler(**create_inference_scheduler_request)

print("\n\n=====CreateInferenceScheduler Response=====\n")
pp = pprint.PrettyPrinter(depth=5)
pp.pprint(create_scheduler_response)
print("\n=====End of CreateInferenceScheduler Response=====")

########################################################
# Wait until RUNNING
########################################################

scheduler_status = create_scheduler_response['Status']
print("=====Polling Inference Scheduler Status=====\n")
print("Scheduler Status: " + scheduler_status)
while scheduler_status == 'PENDING':
    time.sleep(5)
    describe_scheduler_response = lookoutequipment.describe_inference_scheduler(InferenceSchedulerName=INFERENCE_SCHEDULER_NAME)
    scheduler_status = describe_scheduler_response['Status']
    print("Scheduler Status: " + scheduler_status)
print("\n=====End of Polling Inference Scheduler Status=====")
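When you no longer need real-time monitoring, stop the scheduler so that it stops launching inference runs. The helper below is a sketch of that shutdown sequence; it accepts any client object that exposes the stop_inference_scheduler and describe_inference_scheduler operations (such as the boto3 lookoutequipment client from the example above), which also lets you exercise it against a stub.

```python
import time

def stop_scheduler(client, scheduler_name, poll_seconds=5):
    """Request a stop, then poll until the scheduler reports STOPPED."""
    client.stop_inference_scheduler(InferenceSchedulerName=scheduler_name)
    status = 'STOPPING'
    while status == 'STOPPING':
        response = client.describe_inference_scheduler(InferenceSchedulerName=scheduler_name)
        status = response['Status']
        if status == 'STOPPING':
            time.sleep(poll_seconds)
    return status
```

With a real client, you would call stop_scheduler(lookoutequipment, INFERENCE_SCHEDULER_NAME). A stopped scheduler can be started again later or deleted with delete_inference_scheduler.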